Sample records for memory hierarchy optimization

  1. Implementing a bubble memory hierarchy system

    NASA Technical Reports Server (NTRS)

    Segura, R.; Nichols, C. D.

    1979-01-01

    This paper reports on the implementation of a magnetic bubble memory in a two-level hierarchical system. The hierarchy used a major-minor loop device and RAM under microprocessor control. Dynamic memory addressing, dual bus primary memory, and hardware data modification detection are incorporated in the system to minimize access time. The objective of the system is to combine the advantages of bipolar memory with those of bubble-domain memory to provide a smart, optimal memory system that is easy to interface and independent of the user's system.

  2. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for rendering techniques other than MIPs, and their use for more general image processing tasks could be investigated in the future.
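
    As a generic illustration of the cache-blocking theme discussed in the record above, the sketch below computes a z-axis maximum intensity projection with the output image tiled so that each output tile stays cache-resident while every slice streams past it. The volume layout, tile size, and function name are illustrative assumptions, not the authors' code.

    ```cpp
    // Toy z-axis MIP with 2D output tiling so the per-tile working set stays in
    // cache; an illustration of the general idea only.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    void mip_z_tiled(const std::vector<uint16_t>& vol, int nx, int ny, int nz,
                     std::vector<uint16_t>& out, int tile = 64) {
        out.assign(static_cast<std::size_t>(nx) * ny, 0);
        for (int y0 = 0; y0 < ny; y0 += tile)
            for (int x0 = 0; x0 < nx; x0 += tile)
                // Walk every slice for one output tile before moving on, so the
                // tile of 'out' is reused from cache across the whole z range.
                for (int z = 0; z < nz; ++z)
                    for (int y = y0; y < std::min(y0 + tile, ny); ++y)
                        for (int x = x0; x < std::min(x0 + tile, nx); ++x) {
                            std::size_t vi = (static_cast<std::size_t>(z) * ny + y) * nx + x;
                            std::size_t oi = static_cast<std::size_t>(y) * nx + x;
                            out[oi] = std::max(out[oi], vol[vi]);
                        }
    }
    ```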

  3. A class hierarchical, object-oriented approach to virtual memory management

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.

    1989-01-01

    The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.

  4. SMT-Aware Instantaneous Footprint Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Probir; Liu, Xu; Song, Shuaiwen

    Modern architectures employ simultaneous multithreading (SMT) to increase thread-level parallelism. SMT threads share many functional units and the whole memory hierarchy of a physical core. Without a careful code design, SMT threads can easily contend with each other for these shared resources, causing severe performance degradation. Minimizing SMT thread contention for HPC applications running on dedicated platforms is very challenging, because they usually spawn threads within Single Program Multiple Data (SPMD) models. To address this important issue, we introduce a simple scheme for SMT-aware code optimization, which aims to reduce the memory contention across SMT threads.

  5. GPU color space conversion

    NASA Astrophysics Data System (ADS)

    Chase, Patrick; Vondran, Gary

    2011-01-01

    Tetrahedral interpolation is commonly used to implement continuous color space conversions from sparse 3D and 4D lookup tables. We investigate the implementation and optimization of tetrahedral interpolation algorithms for GPUs, and compare to the best known CPU implementations as well as to a well known GPU-based trilinear implementation. We show that a $500 NVIDIA GTX-580 GPU is 3x faster than a $1000 Intel Core i7 980X CPU for 3D interpolation, and 9x faster for 4D interpolation. Performance-relevant GPU attributes are explored including thread scheduling, local memory characteristics, global memory hierarchy, and cache behaviors. We consider existing tetrahedral interpolation algorithms and tune based on the structure and branching capabilities of current GPUs. Global memory performance is improved by reordering and expanding the lookup table to ensure optimal access behaviors. Per-multiprocessor local memory is exploited to implement optimally coalesced global memory accesses, and local memory addressing is optimized to minimize bank conflicts. We explore the impacts of lookup table density upon computation and memory access costs. Also presented are CPU-based 3D and 4D interpolators, using SSE vector operations, that are faster than any previously published solution.
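
    For context, the sketch below shows a plain scalar version of the core operation the record above accelerates: tetrahedral interpolation of an RGB value from a sparse 3D lookup table. The LUT layout, indexing, and names are assumptions made for illustration; they are not the authors' GPU or SSE implementations.

    ```cpp
    // Scalar tetrahedral interpolation from an n*n*n RGB lookup table stored as
    // lut[(r*n + g)*n + b]; inputs r, g, b are assumed to lie in [0, 1].
    #include <algorithm>
    #include <array>
    #include <vector>

    std::array<float, 3> tetra_interp(const std::vector<std::array<float, 3>>& lut,
                                      int n, float r, float g, float b) {
        auto cell = [n](float v, int& i, float& f) {
            float s = v * (n - 1);
            i = std::min(static_cast<int>(s), n - 2);
            f = s - i;
        };
        int ri, gi, bi; float fr, fg, fb;
        cell(r, ri, fr); cell(g, gi, fg); cell(b, bi, fb);
        auto at = [&](int dr, int dg, int db) {
            return lut[((ri + dr) * n + gi + dg) * n + bi + db];
        };
        // Order the fractional parts to pick one of six tetrahedra, then blend
        // the four vertices on the path from (0,0,0) to (1,1,1).
        struct Term { float w; std::array<float, 3> v; };
        std::array<Term, 4> t;
        if (fr >= fg && fg >= fb)
            t = {{{1 - fr, at(0,0,0)}, {fr - fg, at(1,0,0)}, {fg - fb, at(1,1,0)}, {fb, at(1,1,1)}}};
        else if (fr >= fb && fb >= fg)
            t = {{{1 - fr, at(0,0,0)}, {fr - fb, at(1,0,0)}, {fb - fg, at(1,0,1)}, {fg, at(1,1,1)}}};
        else if (fb >= fr && fr >= fg)
            t = {{{1 - fb, at(0,0,0)}, {fb - fr, at(0,0,1)}, {fr - fg, at(1,0,1)}, {fg, at(1,1,1)}}};
        else if (fg >= fr && fr >= fb)
            t = {{{1 - fg, at(0,0,0)}, {fg - fr, at(0,1,0)}, {fr - fb, at(1,1,0)}, {fb, at(1,1,1)}}};
        else if (fg >= fb && fb >= fr)
            t = {{{1 - fg, at(0,0,0)}, {fg - fb, at(0,1,0)}, {fb - fr, at(0,1,1)}, {fr, at(1,1,1)}}};
        else
            t = {{{1 - fb, at(0,0,0)}, {fb - fg, at(0,0,1)}, {fg - fr, at(0,1,1)}, {fr, at(1,1,1)}}};
        std::array<float, 3> out{0.f, 0.f, 0.f};
        for (const auto& term : t)
            for (int c = 0; c < 3; ++c) out[c] += term.w * term.v[c];
        return out;
    }
    ```

    On a GPU the same arithmetic applies; as the abstract notes, most of the tuning effort goes into the table layout, coalesced global loads, and bank-conflict-free local-memory addressing.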

  6. Algorithms for Data Intensive Applications on Intelligent and Smart Memories

    DTIC Science & Technology

    2003-03-01

    …the memory hierarchy as well as the Translation Lookaside Buffer (TLB) affect the effectiveness of cache-friendly optimizations. These penalties vary among processors and cause large variations in the effectiveness of cache performance optimizations. The area of graph problems is fundamental in a wide variety of…

  7. A general model for memory interference in a multiprocessor system with memory hierarchy

    NASA Technical Reports Server (NTRS)

    Taha, Badie A.; Standley, Hilda M.

    1989-01-01

    The problem of memory interference in a multiprocessor system with a hierarchy of shared buses and memories is addressed. The behavior of the processors is represented by a sequence of memory requests with each followed by a determined amount of processing time. A statistical queuing network model for determining the extent of memory interference in multiprocessor systems with clusters of memory hierarchies is presented. The performance of the system is measured by the expected number of busy memory clusters. The results of the analytic model are compared with simulation results, and the correlation between them is found to be very high.

  8. Locality Aware Concurrent Start for Stencil Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Gao, Guang R.; Manzano Franco, Joseph B.

    Stencil computations are at the heart of many physical simulations used in scientific codes. Thus, there exists a plethora of optimization efforts for this family of computations. Among these techniques, tiling techniques that allow concurrent start have proven to be very efficient in providing better performance for these critical kernels. Nevertheless, with many-core designs being the norm, these optimization techniques might not be able to fully exploit locality (both spatial and temporal) on multiple levels of the memory hierarchy without compromising parallelism. It is no longer true that the machine can be seen as a homogeneous collection of nodes with caches, main memory and an interconnect network. New architectural designs exhibit complex grouping of nodes, cores, threads, caches and memory connected by an ever evolving network-on-chip design. These new designs may benefit greatly from carefully crafted schedules and groupings that encourage parallel actors (i.e. threads, cores or nodes) to be aware of the computational history of other actors in close proximity. In this paper, we provide an efficient tiling technique that allows hierarchical concurrent start for memory hierarchy aware tile groups. Each execution schedule and tile shape exploit the available parallelism, load balance and locality present in the given applications. We demonstrate our technique on the Intel Xeon Phi architecture with selected and representative stencil kernels. We show improvement ranging from 5.58% to 31.17% over existing state-of-the-art techniques.
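
    As a minimal point of reference for the tiling discussion above, the sketch below shows plain spatial blocking of a single 2D Jacobi sweep with OpenMP; sizes and layout are illustrative assumptions. It captures only the baseline locality idea; the record's contribution is hierarchical, memory-hierarchy-aware tile groups that additionally allow concurrent start across tiles.

    ```cpp
    // Spatially tiled 2D Jacobi sweep: each tile's working set is small enough
    // to stay in cache, and tiles are distributed across threads with OpenMP.
    #include <algorithm>
    #include <vector>

    void jacobi_tiled(const std::vector<double>& in, std::vector<double>& out,
                      int n, int tile = 128) {
        #pragma omp parallel for collapse(2) schedule(static)
        for (int ii = 1; ii < n - 1; ii += tile)
            for (int jj = 1; jj < n - 1; jj += tile)
                for (int i = ii; i < std::min(ii + tile, n - 1); ++i)
                    for (int j = jj; j < std::min(jj + tile, n - 1); ++j)
                        out[i * n + j] = 0.25 * (in[(i - 1) * n + j] + in[(i + 1) * n + j] +
                                                 in[i * n + j - 1] + in[i * n + j + 1]);
    }
    ```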

  9. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, A B; de Supinski, B; Mueller, F

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
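
    One classic idiom behind such microbenchmark suites is a pointer chase: dependent loads over a random single cycle, timed for growing working sets so the latency steps of the L1/L2/L3/DRAM levels become visible. The sketch below is a hedged, self-contained illustration of that idiom, not code from the suite described above.

    ```cpp
    // Pointer-chasing latency probe: Sattolo's shuffle builds one full cycle, so
    // every load depends on the previous one and prefetchers gain little.
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <utility>
    #include <vector>

    double ns_per_load(std::size_t bytes, std::size_t loads = 1u << 24) {
        std::size_t n = bytes / sizeof(std::size_t);
        std::vector<std::size_t> next(n);
        std::iota(next.begin(), next.end(), std::size_t{0});
        std::mt19937_64 rng{42};
        for (std::size_t i = n - 1; i > 0; --i) {              // Sattolo: single cycle
            std::uniform_int_distribution<std::size_t> pick(0, i - 1);
            std::swap(next[i], next[pick(rng)]);
        }
        std::size_t cur = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < loads; ++i) cur = next[cur];  // dependent loads
        auto t1 = std::chrono::steady_clock::now();
        volatile std::size_t sink = cur;                        // keep the chain alive
        (void)sink;
        return std::chrono::duration<double, std::nano>(t1 - t0).count() / loads;
    }

    int main() {
        for (std::size_t kb = 4; kb <= 64 * 1024; kb *= 2)
            std::printf("%8zu KiB  %6.2f ns/load\n", kb, ns_per_load(kb * 1024));
    }
    ```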

  10. The medial temporal lobe-conduit of parallel connectivity: a model for attention, memory, and perception.

    PubMed

    Mozaffari, Brian

    2014-01-01

    Based on the notion that the brain is equipped with a hierarchical organization, which embodies environmental contingencies across many time scales, this paper suggests that the medial temporal lobe (MTL)-located deep in the hierarchy-serves as a bridge connecting supra- to infra-MTL levels. Bridging the upper and lower regions of the hierarchy provides a parallel architecture that optimizes information flow between upper and lower regions to aid attention, encoding, and processing of quick complex visual phenomenon. Bypassing intermediate hierarchy levels, information conveyed through the MTL "bridge" allows upper levels to make educated predictions about the prevailing context and accordingly select lower representations to increase the efficiency of predictive coding throughout the hierarchy. This selection or activation/deactivation is associated with endogenous attention. In the event that these "bridge" predictions are inaccurate, this architecture enables the rapid encoding of novel contingencies. A review of hierarchical models in relation to memory is provided along with a new theory, Medial-temporal-lobe Conduit for Parallel Connectivity (MCPC). In this scheme, consolidation is considered as a secondary process, occurring after a MTL-bridged connection, which eventually allows upper and lower levels to access each other directly. With repeated reactivations, as contingencies become consolidated, less MTL activity is predicted. Finally, MTL bridging may aid processing transient but structured perceptual events, by allowing communication between upper and lower levels without calling on intermediate levels of representation.

  11. Eye Movement Evidence for Hierarchy Effects on Memory Representation of Discourses.

    PubMed

    Wu, Yingying; Yang, Xiaohong; Yang, Yufang

    2016-01-01

    In this study, we applied the text-change paradigm to investigate whether and how discourse hierarchy affected the memory representation of a discourse. Three kinds of three-sentence discourses were constructed. In the hierarchy-high condition and the hierarchy-low condition, the three sentences of the discourses were hierarchically organized and the last sentence of each discourse was located at the high level and the low level of the discourse hierarchy, respectively. In the linear condition, the three sentences of the discourses were linearly organized. Critical words were always located at the last sentence of the discourses. These discourses were successively presented twice and the critical words were changed to semantically related words in the second presentation. The results showed that during the early processing stage, the critical words were read for longer times when they were changed in the hierarchy-high and the linear conditions, but not in the hierarchy-low condition. During the late processing stage, the changed-critical words were again found to induce longer reading times only when they were in the hierarchy-high condition. These results suggest that words in a discourse have better memory representation when they are located at the higher rather than at the lower level of the discourse hierarchy. Global discourse hierarchy is established as an important factor in constructing the mental representation of a discourse.

  12. Eye Movement Evidence for Hierarchy Effects on Memory Representation of Discourses

    PubMed Central

    Wu, Yingying; Yang, Xiaohong; Yang, Yufang

    2016-01-01

    In this study, we applied the text-change paradigm to investigate whether and how discourse hierarchy affected the memory representation of a discourse. Three kinds of three-sentence discourses were constructed. In the hierarchy-high condition and the hierarchy-low condition, the three sentences of the discourses were hierarchically organized and the last sentence of each discourse was located at the high level and the low level of the discourse hierarchy, respectively. In the linear condition, the three sentences of the discourses were linearly organized. Critical words were always located at the last sentence of the discourses. These discourses were successively presented twice and the critical words were changed to semantically related words in the second presentation. The results showed that during the early processing stage, the critical words were read for longer times when they were changed in the hierarchy-high and the linear conditions, but not in the hierarchy-low condition. During the late processing stage, the changed-critical words were again found to induce longer reading times only when they were in the hierarchy-high condition. These results suggest that words in a discourse have better memory representation when they are located at the higher rather than at the lower level of the discourse hierarchy. Global discourse hierarchy is established as an important factor in constructing the mental representation of a discourse. PMID:26789002

  13. A role for glucocorticoids in the long-term establishment of a social hierarchy.

    PubMed

    Timmer, Marjan; Sandi, Carmen

    2010-11-01

    Stress can affect the establishment and maintenance of social hierarchies. In the present study, we investigated the role of increasing corticosterone levels before or just after a first social encounter between two rats of a dyad in the establishment and the long-term maintenance of a social hierarchy. We show that pre-social encounter corticosterone treatment does not affect the outcome of the hierarchy during a first encounter, but induces a long-term memory for the hierarchy when the corticosterone-injected rat becomes dominant during the encounter, but not when it becomes subordinate. Post-social encounter corticosterone leads to a long-term maintenance of the hierarchy only when the subordinate rat of the dyad is injected with corticosterone. This corticosterone effect mimics previously reported actions of stress on the same model and, hence, implicates glucocorticoids in the consolidation of the memory for a recently established hierarchy. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Memory-Scalable GPU Spatial Hierarchy Construction.

    PubMed

    Hou, Qiming; Sun, Xin; Zhou, Kun; Lauterbach, C.; Manocha, D.

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.

  15. Generating Adaptive Behaviour within a Memory-Prediction Framework

    PubMed Central

    Rawlinson, David; Kowadlo, Gideon

    2012-01-01

    The Memory-Prediction Framework (MPF) and its Hierarchical-Temporal Memory implementation (HTM) have been widely applied to unsupervised learning problems, for both classification and prediction. To date, there has been no attempt to incorporate MPF/HTM in reinforcement learning or other adaptive systems; that is, to use knowledge embodied within the hierarchy to control a system, or to generate behaviour for an agent. This problem is interesting because the human neocortex is believed to play a vital role in the generation of behaviour, and the MPF is a model of the human neocortex. We propose some simple and biologically-plausible enhancements to the Memory-Prediction Framework. These cause it to explore and interact with an external world, while trying to maximize a continuous, time-varying reward function. All behaviour is generated and controlled within the MPF hierarchy. The hierarchy develops from a random initial configuration by interaction with the world and reinforcement learning only. Among other demonstrations, we show that a 2-node hierarchy can learn to successfully play “rocks, paper, scissors” against a predictable opponent. PMID:22272231

  16. A memristor-based nonvolatile latch circuit

    NASA Astrophysics Data System (ADS)

    Robinett, Warren; Pickett, Matthew; Borghetti, Julien; Xia, Qiangfei; Snider, Gregory S.; Medeiros-Ribeiro, Gilberto; Williams, R. Stanley

    2010-06-01

    Memristive devices, which exhibit a dynamical conductance state that depends on the excitation history, can be used as nonvolatile memory elements by storing information as different conductance states. We describe the implementation of a nonvolatile synchronous flip-flop circuit that uses a nanoscale memristive device as the nonvolatile memory element. Controlled testing of the circuit demonstrated successful state storage and restoration, with an error rate of 0.1%, during 1000 power loss events. These results indicate that integration of digital logic devices and memristors could open the way for nonvolatile computation with applications in small platforms that rely on intermittent power sources. This demonstrated feasibility of tight integration of memristors with CMOS (complementary metal-oxide-semiconductor) circuitry challenges the traditional memory hierarchy, in which nonvolatile memory is only available as a large, slow, monolithic block at the bottom of the hierarchy. In contrast, the nonvolatile, memristor-based memory cell can be fast, fine-grained and small, and is compatible with conventional CMOS electronics. This threatens to upset the traditional memory hierarchy, and may open up new architectural possibilities beyond it.

  17. Stress amplifies memory for social hierarchy.

    PubMed

    Cordero, María Isabel; Sandi, Carmen

    2007-11-01

    Individuals differ in their social status and societies in the extent of social status differences among their members. There is great interest in understanding the key factors that contribute to the establishment of social dominance structures. Given that stress can affect behavior and cognition, we hypothesized that, given equal opportunities to become either dominant or submissive, stress experienced by one of the individuals during their first encounter would determine the long-term establishment of a social hierarchy by acting as a two-stage rocket: (1) by influencing the rank achieved after a social encounter and (2) by facilitating and/or promoting a long-term memory for the specific hierarchy. Using a novel model for the assessment of long-term dominance hierarchies in rats, we present here the first evidence supporting such a hypothesis. In control conditions, the social rank established through a first interaction and food competition test between two male rats is not maintained when animals are confronted 1 week later. However, if one of the rats is stressed just before their first encounter, the dominance hierarchy developed on day 1 is still clearly observed 1 week later, with the stressed animal becoming submissive (i.e., the loser in competition tests) in both social interactions. Our findings also allow us to propose that stress potentiates a hierarchy-linked recognition memory between "specific" individuals through mechanisms that involve de novo protein synthesis. These results implicate stress among the key mechanisms contributing to create social imbalance and highlight memory mechanisms as key mediators of stress-induced long-term establishment of social rank.

  18. MemAxes: Visualization and Analytics for Characterizing Complex Memory Performance Behaviors.

    PubMed

    Gimenez, Alfredo; Gamblin, Todd; Jusufi, Ilir; Bhatele, Abhinav; Schulz, Martin; Bremer, Peer-Timo; Hamann, Bernd

    2018-07-01

    Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.

  19. Two-Hierarchy Entanglement Swapping for a Linear Optical Quantum Repeater

    NASA Astrophysics Data System (ADS)

    Xu, Ping; Yong, Hai-Lin; Chen, Luo-Kan; Liu, Chang; Xiang, Tong; Yao, Xing-Can; Lu, He; Li, Zheng-Da; Liu, Nai-Le; Li, Li; Yang, Tao; Peng, Cheng-Zhi; Zhao, Bo; Chen, Yu-Ao; Pan, Jian-Wei

    2017-10-01

    Quantum repeaters play a significant role in achieving long-distance quantum communication. In the past decades, tremendous effort has been devoted towards constructing a quantum repeater. As one of the crucial elements, entanglement has been created in different memory systems via entanglement swapping. The realization of j-hierarchy entanglement swapping, i.e., connecting quantum memory and further extending the communication distance, is important for implementing a practical quantum repeater. Here, we report the first demonstration of a fault-tolerant two-hierarchy entanglement swapping with linear optics using parametric down-conversion sources. In the experiment, the dominant or most probable noise terms in the one-hierarchy entanglement swapping, which are on the same order of magnitude as the desired state and prevent further entanglement connections, are automatically washed out by a proper design of the detection setting, and the communication distance can be extended. Given suitable quantum memory, our techniques can be directly applied to implementing an atomic ensemble based quantum repeater, and are of significant importance for scalable quantum information processing.

  20. Two-Hierarchy Entanglement Swapping for a Linear Optical Quantum Repeater.

    PubMed

    Xu, Ping; Yong, Hai-Lin; Chen, Luo-Kan; Liu, Chang; Xiang, Tong; Yao, Xing-Can; Lu, He; Li, Zheng-Da; Liu, Nai-Le; Li, Li; Yang, Tao; Peng, Cheng-Zhi; Zhao, Bo; Chen, Yu-Ao; Pan, Jian-Wei

    2017-10-27

    Quantum repeaters play a significant role in achieving long-distance quantum communication. In the past decades, tremendous effort has been devoted towards constructing a quantum repeater. As one of the crucial elements, entanglement has been created in different memory systems via entanglement swapping. The realization of j-hierarchy entanglement swapping, i.e., connecting quantum memory and further extending the communication distance, is important for implementing a practical quantum repeater. Here, we report the first demonstration of a fault-tolerant two-hierarchy entanglement swapping with linear optics using parametric down-conversion sources. In the experiment, the dominant or most probable noise terms in the one-hierarchy entanglement swapping, which are on the same order of magnitude as the desired state and prevent further entanglement connections, are automatically washed out by a proper design of the detection setting, and the communication distance can be extended. Given suitable quantum memory, our techniques can be directly applied to implementing an atomic ensemble based quantum repeater, and are of significant importance for scalable quantum information processing.

  1. The medial temporal lobe—conduit of parallel connectivity: a model for attention, memory, and perception

    PubMed Central

    Mozaffari, Brian

    2014-01-01

    Based on the notion that the brain is equipped with a hierarchical organization, which embodies environmental contingencies across many time scales, this paper suggests that the medial temporal lobe (MTL)—located deep in the hierarchy—serves as a bridge connecting supra- to infra—MTL levels. Bridging the upper and lower regions of the hierarchy provides a parallel architecture that optimizes information flow between upper and lower regions to aid attention, encoding, and processing of quick complex visual phenomenon. Bypassing intermediate hierarchy levels, information conveyed through the MTL “bridge” allows upper levels to make educated predictions about the prevailing context and accordingly select lower representations to increase the efficiency of predictive coding throughout the hierarchy. This selection or activation/deactivation is associated with endogenous attention. In the event that these “bridge” predictions are inaccurate, this architecture enables the rapid encoding of novel contingencies. A review of hierarchical models in relation to memory is provided along with a new theory, Medial-temporal-lobe Conduit for Parallel Connectivity (MCPC). In this scheme, consolidation is considered as a secondary process, occurring after a MTL-bridged connection, which eventually allows upper and lower levels to access each other directly. With repeated reactivations, as contingencies become consolidated, less MTL activity is predicted. Finally, MTL bridging may aid processing transient but structured perceptual events, by allowing communication between upper and lower levels without calling on intermediate levels of representation. PMID:25426036

  2. Exploring Machine Learning Techniques For Dynamic Modeling on Future Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shuaiwen; Tallent, Nathan R.; Vishnu, Abhinav

    2013-09-23

    Future exascale systems must be optimized for both power and performance at scale in order to achieve DOE's goal of a sustained petaflop within 20 Megawatts by 2022 [1]. Massive parallelism of the future systems combined with complex memory hierarchies will form a barrier to efficient application and architecture design. These challenges are exacerbated with emerging complex architectures such as GPGPUs and Intel Xeon Phi as parallelism increases orders of magnitude and system power consumption can easily triple or quadruple. Therefore, we need techniques that can reduce the search space for optimization, isolate power-performance bottlenecks, identify root causes for software/hardware inefficiency, and effectively direct runtime scheduling.

  3. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    NASA Astrophysics Data System (ADS)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  4. Visual perception as retrospective Bayesian decoding from high- to low-level features

    PubMed Central

    Ding, Stephanie; Cueva, Christopher J.; Tsodyks, Misha; Qian, Ning

    2017-01-01

    When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. PMID:29073108

  5. Dynamic storage in resource-scarce browsing multimedia applications

    NASA Astrophysics Data System (ADS)

    Elenbaas, Herman; Dimitrova, Nevenka

    1998-10-01

    In the convergence of information and entertainment there is a conflict between the consumer's expectation of fast access to high-quality multimedia content through narrow bandwidth channels and the size of this content. During the retrieval and presentation of a multimedia application, two problems have to be solved: the limited bandwidth during transmission of the retrieved multimedia content and the limited memory for temporary caching. In this paper we propose an approach for latency optimization in information browsing applications. We propose a method for flattening hierarchically linked documents in a manner convenient for network transport over slow channels to minimize browsing latency. Flattening of the hierarchy involves linearization, compression and bundling of the document nodes. After the transfer, the compressed hierarchy is stored on a local device where it can be partly unbundled to fit the caching limits at the local site while giving the user access to the content.
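
    As a toy illustration of the linearize-and-bundle step described above, the sketch below serializes a tree of linked document nodes into a single byte stream with a depth-first walk; compression and unbundling are omitted, and the node type and record layout are assumptions made for illustration.

    ```cpp
    // Depth-first linearization of a document tree into one flat bundle:
    // each node is written as [payload length][payload bytes][child count],
    // with its children following recursively.
    #include <cstdint>
    #include <string>
    #include <vector>

    struct DocNode {
        std::string payload;
        std::vector<DocNode> children;
    };

    void linearize(const DocNode& node, std::vector<std::uint8_t>& bundle) {
        auto put_u32 = [&](std::uint32_t v) {
            for (int i = 0; i < 4; ++i)
                bundle.push_back(static_cast<std::uint8_t>(v >> (8 * i)));
        };
        put_u32(static_cast<std::uint32_t>(node.payload.size()));
        bundle.insert(bundle.end(), node.payload.begin(), node.payload.end());
        put_u32(static_cast<std::uint32_t>(node.children.size()));
        for (const auto& child : node.children) linearize(child, bundle);
    }
    ```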

  6. Formal verification of a set of memory management units

    NASA Technical Reports Server (NTRS)

    Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.

    1992-01-01

    This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.

  7. Automated Cache Performance Analysis And Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohror, Kathryn

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge, no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and to create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on the infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters, cache behavior could only be measured reliably in the aggregate across tens or hundreds of thousands of instructions. With the newest iteration of PEBS technology, cache events can be tied to a tuple of instruction pointer, target address (for both loads and stores), memory hierarchy level, and observed latency. With this information we can now begin asking questions regarding the efficiency of not only regions of code, but how these regions interact with particular data structures and how these interactions evolve over time. In the short term, this information will be vital for performance analysts understanding and optimizing the behavior of their codes for the memory hierarchy. In the future, we can begin to ask how data layouts might be changed to improve performance and, for a particular application, what the theoretical optimal performance might be. The overall benefit to be produced by this effort was a commercial-quality, easy-to-use, and scalable performance tool that will allow both beginner and experienced parallel programmers to automatically tune their applications for optimal cache usage. Effective use of such a tool can literally save weeks of performance tuning effort. Easy to use: with the proposed innovations, finding and fixing memory performance issues would be more automated and hide most to all of the performance-engineer expertise "under the hood" of the Open|SpeedShop performance tool. One of the biggest public benefits from the proposed innovations is that it makes performance analysis more usable to a larger group of application developers. Intuitive reporting of results: the Open|SpeedShop performance analysis tool has a rich set of intuitive yet detailed reports for presenting performance results to application developers. Our goal was to leverage this existing technology to present the results from our memory performance addition to Open|SpeedShop. Suitable for experts as well as novices: application performance is getting more difficult to measure as the hardware platforms applications run on become more complicated. This makes life difficult for the application developer, in that they need to know more about the hardware platform, including the memory system hierarchy, in order to understand the performance of their application. Some application developers are comfortable in that scenario, while others want to do their scientific research and not have to understand all the nuances of the hardware platform they are running their application on. Our proposed innovations were aimed to support both expert and novice performance analysts. Useful in many markets: the enhancement to Open|SpeedShop would appeal to a broader market space, as it will be useful in scientific, commercial, and cloud computing environments. Our goal was to use technology developed initially at Lawrence Livermore National Laboratory, combined with the development and commercial software experience of Argo Navis Technologies, LLC (ANT), to form a powerful combination to deliver these objectives.

  8. Visual perception as retrospective Bayesian decoding from high- to low-level features.

    PubMed

    Ding, Stephanie; Cueva, Christopher J; Tsodyks, Misha; Qian, Ning

    2017-10-24

    When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. Published under the PNAS license.

  9. Automatic blocking of nested loops

    NASA Technical Reports Server (NTRS)

    Schreiber, Robert; Dongarra, Jack J.

    1990-01-01

    Blocked algorithms have much better properties of data locality and therefore can be much more efficient than ordinary algorithms when a memory hierarchy is involved. On the other hand, they are very difficult to write and to tune for particular machines. The reorganization is considered of nested loops through the use of known program transformations in order to create blocked algorithms automatically. The program transformations used are strip mining, loop interchange, and a variant of loop skewing in which invertible linear transformations (with integer coordinates) of the loop indices are allowed. Some problems are solved concerning the optimal application of these transformations. It is shown, in a very general setting, how to choose a nearly optimal set of transformed indices. It is then shown, in one particular but rather frequently occurring situation, how to choose an optimal set of block sizes.
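
    As a concrete instance of the transformations discussed above, the sketch below strip-mines all three loops of a matrix multiply and interchanges them so that each block of A, B, and C is reused while it is still in cache. The block size is an illustrative assumption to be tuned per machine, and the code is a generic textbook form rather than the paper's generated output.

    ```cpp
    // Blocked (tiled) matrix multiply obtained by strip mining + loop interchange.
    // C must be zero-initialized by the caller; matrices are n*n, row-major.
    #include <algorithm>
    #include <vector>

    void matmul_blocked(const std::vector<double>& A, const std::vector<double>& B,
                        std::vector<double>& C, int n, int bs = 64) {
        for (int ii = 0; ii < n; ii += bs)           // strip-mined i
            for (int kk = 0; kk < n; kk += bs)       // strip-mined k, moved outward
                for (int jj = 0; jj < n; jj += bs)   // strip-mined j
                    for (int i = ii; i < std::min(ii + bs, n); ++i)
                        for (int k = kk; k < std::min(kk + bs, n); ++k) {
                            double a = A[i * n + k];
                            for (int j = jj; j < std::min(jj + bs, n); ++j)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }
    ```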

  10. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    NASA Astrophysics Data System (ADS)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their tumid size and interference accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which can optimally allocate the multi-level cache resources to many cores, greatly improving the efficiency of the cache hierarchy and resulting in low energy consumption. The BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the cache allocation bases, so that we can optimally configure the cache hierarchy to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by between 5.29% and 27.94% compared with other key approaches, while the performance of the multi-core system even improves slightly when hardware overhead is taken into account.

  11. Dynamic Organization of Hierarchical Memories

    PubMed Central

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2016-01-01

    In the brain, external objects are categorized in a hierarchical way. Although it is widely accepted that objects are represented as static attractors in neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential to understand how the neural system responds to inputs. Indeed, structured spontaneous neural activity without external inputs is known to exist, and its relationship with evoked activities is discussed. Then, how categorical representation is embedded into the spontaneous and evoked activities has to be uncovered. To address this question, we studied the bifurcation process with increasing input after hierarchically clustered associative memories are learned. We found a "dynamic categorization"; neural activity without input wanders globally over the state space including all memories. Then, with the increase of input strength, diffuse representation of a higher category exhibits transitions to focused ones specific to each object. The hierarchy of memories is embedded in the transition probability from one memory to another during the spontaneous dynamics. With increased input strength, neural activity wanders over a narrower state space including a smaller set of memories, showing a more specific category or memory corresponding to the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, which agrees with experimental findings in the temporal cortex. These results suggest the hierarchy emerging through interaction with an external input underlies hierarchy during the transient process, as well as in the spontaneous activity. PMID:27618549

  12. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning

    PubMed Central

    Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron

    2014-01-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474

  13. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning.

    PubMed

    Ichnowski, Jeffrey; Prins, Jan F; Alterovitz, Ron

    2014-05-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU's cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot's configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot.

  14. Exploring performance and energy tradeoffs for irregular applications: A case study on the Tilera many-core architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.

    High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, memory hierarchy and on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions - memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy-optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
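
    The search loop at the heart of such an auto-tuning study can be illustrated with a deliberately tiny example: time one kernel under a handful of candidate tile sizes and keep the fastest. Everything below (the kernel, the candidate set, the timing harness) is an assumption made for illustration; OpenTuner itself searches far larger, multi-dimensional spaces covering layouts, compiler flags, and OpenMP schedules.

    ```cpp
    // Exhaustive mini "auto-tuner": benchmark a blocked transpose for several
    // tile sizes and report the best-performing configuration.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    double time_kernel(int tile, const std::vector<double>& in,
                       std::vector<double>& out, int n) {
        auto t0 = std::chrono::steady_clock::now();
        for (int ii = 0; ii < n; ii += tile)
            for (int jj = 0; jj < n; jj += tile)
                for (int i = ii; i < std::min(ii + tile, n); ++i)
                    for (int j = jj; j < std::min(jj + tile, n); ++j)
                        out[static_cast<std::size_t>(j) * n + i] =
                            in[static_cast<std::size_t>(i) * n + j];
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    }

    int main() {
        const int n = 4096;
        std::vector<double> in(static_cast<std::size_t>(n) * n, 1.0), out(in.size());
        int best_tile = 0;
        double best_t = 1e30;
        for (int tile : {16, 32, 64, 128, 256}) {
            double t = time_kernel(tile, in, out, n);
            std::printf("tile %4d : %.3f s\n", tile, t);
            if (t < best_t) { best_t = t; best_tile = tile; }
        }
        std::printf("best tile: %d\n", best_tile);
    }
    ```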

  15. Communication-avoiding symmetric-indefinite factorization

    DOE PAGES

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James; ...

    2014-11-13

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTL^T P^T, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.
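
    For reference, the factorization named in the record can be written out explicitly. The communication bound quoted alongside it is the standard lower bound for dense factorizations in a two-level memory with fast-memory size M, which communication-optimal algorithms attain up to constant factors; it is stated here as general background, not as a figure taken from this paper.

    ```latex
    % Factorization form and the generic communication lower bound it attains.
    \[
      A \;=\; P\,L\,T\,L^{\mathsf{T}}P^{\mathsf{T}},
      \qquad
      W \;=\; \Omega\!\left(\frac{n^{3}}{\sqrt{M}}\right)\ \text{words moved},
    \]
    % P: permutation, L: unit lower triangular, T: block tridiagonal and banded,
    % M: fast-memory (cache) size, W: data moved between the two memory levels.
    ```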

  16. Communication-avoiding symmetric-indefinite factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTL^T P^T, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.

  17. A Bayesian Sampler for Optimization of Protein Domain Hierarchies

    PubMed Central

    2014-01-01

    The process of identifying and modeling functionally divergent subgroups for a specific protein domain class and arranging these subgroups hierarchically has, thus far, largely been done via manual curation. How to accomplish this automatically and optimally is an unsolved statistical and algorithmic problem that is addressed here via Markov chain Monte Carlo sampling. Taking as input a (typically very large) multiple-sequence alignment, the sampler creates and optimizes a hierarchy by adding and deleting leaf nodes, by moving nodes and subtrees up and down the hierarchy, by inserting or deleting internal nodes, and by redefining the sequences and conserved patterns associated with each node. All such operations are based on a probability distribution that models the conserved and divergent patterns defining each subgroup. When we view these patterns as sequence determinants of protein function, each node or subtree in such a hierarchy corresponds to a subgroup of sequences with similar biological properties. The sampler can be applied either de novo or to an existing hierarchy. When applied to 60 protein domains from multiple starting points in this way, it converged on similar solutions with nearly identical log-likelihood ratio scores, suggesting that it typically finds the optimal peak in the posterior probability distribution. Similarities and differences between independently generated, nearly optimal hierarchies for a given domain help distinguish robust from statistically uncertain features. Thus, a future application of the sampler is to provide confidence measures for various features of a domain hierarchy. PMID:24494927

  18. Optimal Planning and Problem-Solving

    NASA Technical Reports Server (NTRS)

    Clement, Bradley; Schaffer, Steven; Rabideau, Gregg

    2008-01-01

    CTAEMS MDP Optimal Planner is a problem-solving software designed to command a single spacecraft/rover, or a team of spacecraft/rovers, to perform the best action possible at all times according to an abstract model of the spacecraft/rover and its environment. It also may be useful in solving logistical problems encountered in commercial applications such as shipping and manufacturing. The planner reasons around uncertainty according to specified probabilities of outcomes using a plan hierarchy to avoid exploring certain kinds of suboptimal actions. Also, planned actions are calculated as the state-action space is expanded, rather than afterward, to reduce by an order of magnitude the processing time and memory used. The software solves planning problems with actions that can execute concurrently, that have uncertain duration and quality, and that have functional dependencies on others that affect quality. These problems are modeled in a hierarchical planning language called C_TAEMS, a derivative of the TAEMS language for specifying domains for the DARPA Coordinators program. In realistic environments, actions often have uncertain outcomes and can have complex relationships with other tasks. The planner approaches problems by considering all possible actions that may be taken from any state reachable from a given, initial state, and from within the constraints of a given task hierarchy that specifies what tasks may be performed by which team member.

  19. It's all coming back to me now: perception and memory in amnesia.

    PubMed

    Baxter, Mark G

    2012-07-12

    Medial temporal lobe (MTL) structures may constitute a representational hierarchy, rather than a dedicated system for memory. Barense et al. (2012) show that intact memory for object features can interfere with perception of complex objects in individuals with MTL amnesia. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Short-term plasticity as a neural mechanism supporting memory and attentional functions.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Andermann, Mark L; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2011-11-08

    Based on behavioral studies, several relatively distinct perceptual and cognitive functions have been defined in cognitive psychology such as sensory memory, short-term memory, and selective attention. Here, we review evidence suggesting that some of these functions may be supported by shared underlying neuronal mechanisms. Specifically, we present, based on an integrative review of the literature, a hypothetical model wherein short-term plasticity, in the form of transient center-excitatory and surround-inhibitory modulations, constitutes a generic processing principle that supports sensory memory, short-term memory, involuntary attention, selective attention, and perceptual learning. In our model, the size and complexity of receptive fields/level of abstraction of neural representations, as well as the length of temporal receptive windows, increases as one steps up the cortical hierarchy. Consequently, the type of input (bottom-up vs. top down) and the level of cortical hierarchy that the inputs target, determine whether short-term plasticity supports purely sensory vs. semantic short-term memory or attentional functions. Furthermore, we suggest that rather than discrete memory systems, there are continuums of memory representations from short-lived sensory ones to more abstract longer-duration representations, such as those tapped by behavioral studies of short-term memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Spiral: Automated Computing for Linear Transforms

    NASA Astrophysics Data System (ADS)

    Püschel, Markus

    2010-09-01

    Writing fast software has become extraordinarily difficult. For optimal performance, programs and their underlying algorithms have to be adapted to take full advantage of the platform's parallelism, memory hierarchy, and available instruction set. To make things worse, the best implementations are often platform-dependent and platforms are constantly evolving, which quickly renders libraries obsolete. We present Spiral, a domain-specific program generation system for important functionality used in signal processing and communication including linear transforms, filters, and other functions. Spiral completely replaces the human programmer. For a desired function, Spiral generates alternative algorithms, optimizes them, compiles them into programs, and intelligently searches for the best match to the computing platform. The main idea behind Spiral is a mathematical, declarative, domain-specific framework to represent algorithms and the use of rewriting systems to generate and optimize algorithms at a high level of abstraction. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-written code.
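
    A minimal sketch of the generate-and-search idea behind autotuners such as Spiral (illustrative names only, not Spiral's actual interface): enumerate algorithmically equivalent variants of a transform, time each on the target platform, and keep the fastest.

        import timeit
        import numpy as np

        x = np.random.rand(1024)
        n = x.size

        # Two algorithmically equivalent variants of the same transform (here, a DFT).
        variants = {
            "library_fft": lambda: np.fft.fft(x),
            "matrix_dft": lambda: np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) @ x,
        }

        # Empirical search: time each candidate on this platform and keep the fastest,
        # which is the role Spiral's search plays over its generated algorithm space.
        timings = {name: timeit.timeit(fn, number=5) for name, fn in variants.items()}
        best = min(timings, key=timings.get)
        print(timings, "->", best)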

  2. Research on Contextual Memorizing of Meaning in Foreign Language Vocabulary

    ERIC Educational Resources Information Center

    Xu, Linjing; Xiong, Qingxia; Qin, Yufang

    2018-01-01

    The purpose of this study was to examine the role of contexts in the memory of meaning in foreign vocabularies. The study was based on the cognitive processing hierarchy theory of Craik and Lockhart (1972), the memory trace theory of McClelland and Rumelhart (1986) and the memory trace theory of cognitive psychology. The subjects were non-English…

  3. Spatial resolution in visual memory.

    PubMed

    Ben-Shalom, Asaf; Ganel, Tzvi

    2015-04-01

    Representations in visual short-term memory are considered to contain relatively elaborated information on object structure. Conversely, representations in earlier stages of the visual hierarchy are thought to be dominated by a sensory-based, feed-forward buildup of information. In four experiments, we compared the spatial resolution of different object properties between two points in time along the processing hierarchy in visual short-term memory. Subjects were asked either to estimate the distance between objects or to estimate the size of one of the objects' features under two experimental conditions, of either a short or a long delay period between the presentation of the target stimulus and the probe. When different objects were referred to, similar spatial resolution was found for the two delay periods, suggesting that initial processing stages are sensitive to object-based properties. Conversely, superior resolution was found for the short, as compared with the long, delay when features were referred to. These findings suggest that initial representations in visual memory are hybrid in that they allow fine-grained resolution for object features alongside normal visual sensitivity to the segregation between objects. The findings are also discussed in reference to the distinction made in earlier studies between visual short-term memory and iconic memory.

  4. Memory hierarchy using row-based compression

    DOEpatents

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
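
    A hedged software sketch of the row organization described in the record, with zlib standing in for the hardware compression logic: each row keeps tag blocks that describe compressed data blocks of non-uniform size, and blocks are decompressed on access.

        import zlib

        class CompressedRow:
            """One cache row: a set of tag blocks plus compressed data blocks of non-uniform size."""
            def __init__(self):
                self.tags = []     # each tag records (source address, compressed length)
                self.blocks = []   # compressed payloads

            def store(self, address, data: bytes):
                payload = zlib.compress(data)            # compression logic
                self.tags.append((address, len(payload)))
                self.blocks.append(payload)

            def load(self, address) -> bytes:
                for (addr, _), payload in zip(self.tags, self.blocks):
                    if addr == address:
                        return zlib.decompress(payload)  # decompression logic on access
                raise KeyError(address)

        row = CompressedRow()
        row.store(0x1000, b"\x00" * 64)       # highly compressible block
        row.store(0x1040, bytes(range(64)))   # less compressible block
        print(row.tags, row.load(0x1000) == b"\x00" * 64)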

  5. The rules of implicit evaluation by race, religion, and age.

    PubMed

    Axt, Jordan R; Ebersole, Charles R; Nosek, Brian A

    2014-09-01

    The social world is stratified. Social hierarchies are known but often disavowed as anachronisms or unjust. Nonetheless, hierarchies may persist in social memory. In three studies (total N > 200,000), we found evidence of social hierarchies in implicit evaluation by race, religion, and age. Participants implicitly evaluated their own racial group most positively and the remaining racial groups in accordance with the following hierarchy: Whites > Asians > Blacks > Hispanics. Similarly, participants implicitly evaluated their own religion most positively and the remaining religions in accordance with the following hierarchy: Christianity > Judaism > Hinduism or Buddhism > Islam. In a final study, participants of all ages implicitly evaluated age groups following this rule: children > young adults > middle-age adults > older adults. These results suggest that the rules of social evaluation are pervasively embedded in culture and mind. © The Author(s) 2014.

  6. Diverse Heterologous Primary Infections Radically Alter Immunodominance Hierarchies and Clinical Outcomes Following H7N9 Influenza Challenge in Mice

    PubMed Central

    Duan, Susu; Meliopoulos, Victoria A.; McClaren, Jennifer L.; Guo, Xi-Zhi J.; Sanders, Catherine J.; Smallwood, Heather S.; Webby, Richard J.; Schultz-Cherry, Stacey L.; Doherty, Peter C.; Thomas, Paul G.

    2015-01-01

    The recent emergence of a novel H7N9 influenza A virus (IAV) causing severe human infections in China raises concerns about a possible pandemic. The lack of pre-existing neutralizing antibodies in the broader population highlights the potential protective role of IAV-specific CD8+ cytotoxic T lymphocyte (CTL) memory specific for epitopes conserved between H7N9 and previously encountered IAVs. In the present study, the heterosubtypic immunity generated by prior H9N2 or H1N1 infections significantly, but variably, reduced morbidity and mortality, pulmonary virus load and time to clearance in mice challenged with the H7N9 virus. In all cases, the recall of established CTL memory was characterized by earlier, greater airway infiltration of effectors targeting the conserved or cross-reactive H7N9 IAV peptides; though, depending on the priming IAV, each case was accompanied by distinct CTL epitope immunodominance hierarchies for the prominent KbPB1703, DbPA224, and DbNP366 epitopes. While the presence of conserved, variable, or cross-reactive epitopes between the priming H9N2 and H1N1 and the challenge H7N9 IAVs clearly influenced any change in the immunodominance hierarchy, the changing patterns were not tied solely to epitope conservation. Furthermore, the total size of the IAV-specific memory CTL pool after priming was a better predictor of favorable outcomes than the extent of epitope conservation or secondary CTL expansion. Modifying the size of the memory CTL pool significantly altered its subsequent protective efficacy on disease severity or virus clearance, confirming the important role of heterologous priming. These findings establish that both the protective efficacy of heterosubtypic immunity and CTL immunodominance hierarchies are reflective of the immunological history of the host, a finding that has implications for understanding human CTL responses and the rational design of CTL-mediated vaccines. PMID:25668410

  7. An approach to separating the levels of hierarchical structure building in language and mathematics.

    PubMed

    Makuuchi, Michiru; Bahlmann, Jörg; Friederici, Angela D

    2012-07-19

    We aimed to dissociate two levels of hierarchical structure building in language and mathematics, namely 'first-level' (the build-up of hierarchical structure with externally given elements) and 'second-level' (the build-up of hierarchical structure with internally represented elements produced by first-level processes). Using functional magnetic resonance imaging, we investigated these processes in three domains: sentence comprehension, arithmetic calculation (using Reverse Polish notation, which gives two operands followed by an operator) and a working memory control task. All tasks required the build-up of hierarchical structures at the first- and second-level, resulting in a similar computational hierarchy across language and mathematics, as well as in a working memory control task. Using a novel method that estimates the difference in the integration cost for conditions of different trial durations, we found an anterior-to-posterior functional organization in the prefrontal cortex, according to the level of hierarchy. Common to all domains, the ventral premotor cortex (PMv) supports first-level hierarchy building, while the dorsal pars opercularis (POd) subserves second-level hierarchy building, with lower activation for language compared with the other two tasks. These results suggest that the POd and the PMv support domain-general mechanisms for hierarchical structure building, with the POd being uniquely efficient for language.

  8. Semantic Memory Redux: An Experimental Test of Hierarchical Category Representation

    ERIC Educational Resources Information Center

    Murphy, Gregory L.; Hampton, James A.; Milovanovic, Goran S.

    2012-01-01

    Four experiments investigated the classic issue in semantic memory of whether people organize categorical information in hierarchies and use inference to retrieve information from them, as proposed by Collins and Quillian (1969). Past evidence has focused on RT to confirm sentences such as "All birds are animals" or "Canaries breathe." However,…

  9. The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory

    PubMed Central

    Alnajjar, Fady; Yamashita, Yuichi; Tani, Jun

    2013-01-01

    Higher-order cognitive mechanisms (HOCM), such as planning, cognitive branching, switching, etc., are known to be the outcomes of a unique neural organizations and dynamics between various regions of the frontal lobe. Although some recent anatomical and neuroimaging studies have shed light on the architecture underlying the formation of such mechanisms, the neural dynamics and the pathways in and between the frontal lobe to form and/or to tune the stability level of its working memory remain controversial. A model to clarify this aspect is therefore required. In this study, we propose a simple neurocomputational model that suggests the basic concept of how HOCM, including the cognitive branching and switching in particular, may mechanistically emerge from time-based neural interactions. The proposed model is constructed such that its functional and structural hierarchy mimics, to a certain degree, the biological hierarchy that is believed to exist between local regions in the frontal lobe. Thus, the hierarchy is attained not only by the force of the layout architecture of the neural connections but also through distinct types of neurons, each with different time properties. To validate the model, cognitive branching and switching tasks were simulated in a physical humanoid robot driven by the model. Results reveal that separation between the lower and the higher-level neurons in such a model is an essential factor to form an appropriate working memory to handle cognitive branching and switching. The analyses of the obtained result also illustrates that the breadth of this separation is important to determine the characteristics of the resulting memory, either static memory or dynamic memory. This work can be considered as a joint research between synthetic and empirical studies, which can open an alternative research area for better understanding of brain mechanisms. PMID:23423881

  10. The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory.

    PubMed

    Alnajjar, Fady; Yamashita, Yuichi; Tani, Jun

    2013-01-01

    Higher-order cognitive mechanisms (HOCM), such as planning, cognitive branching, switching, etc., are known to be the outcomes of a unique neural organizations and dynamics between various regions of the frontal lobe. Although some recent anatomical and neuroimaging studies have shed light on the architecture underlying the formation of such mechanisms, the neural dynamics and the pathways in and between the frontal lobe to form and/or to tune the stability level of its working memory remain controversial. A model to clarify this aspect is therefore required. In this study, we propose a simple neurocomputational model that suggests the basic concept of how HOCM, including the cognitive branching and switching in particular, may mechanistically emerge from time-based neural interactions. The proposed model is constructed such that its functional and structural hierarchy mimics, to a certain degree, the biological hierarchy that is believed to exist between local regions in the frontal lobe. Thus, the hierarchy is attained not only by the force of the layout architecture of the neural connections but also through distinct types of neurons, each with different time properties. To validate the model, cognitive branching and switching tasks were simulated in a physical humanoid robot driven by the model. Results reveal that separation between the lower and the higher-level neurons in such a model is an essential factor to form an appropriate working memory to handle cognitive branching and switching. The analyses of the obtained result also illustrates that the breadth of this separation is important to determine the characteristics of the resulting memory, either static memory or dynamic memory. This work can be considered as a joint research between synthetic and empirical studies, which can open an alternative research area for better understanding of brain mechanisms.

  11. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  12. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    PubMed

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.

  13. Binary mesh partitioning for cache-efficient visualization.

    PubMed

    Tchiboukdjian, Marc; Danjean, Vincent; Raffin, Bruno

    2010-01-01

    One important bottleneck when visualizing large data sets is the data transfer between processor and memory. Cache-aware (CA) and cache-oblivious (CO) algorithms take the memory hierarchy into consideration to design cache-efficient algorithms. CO approaches have the advantage of adapting to unknown and varying memory hierarchies. Recent CA and CO algorithms developed for 3D mesh layouts significantly improve performance over previous approaches, but they lack theoretical performance guarantees. We present in this paper an O(N log N) algorithm to compute a CO layout for unstructured but well-shaped meshes. We prove that a coherent traversal of an N-size mesh in dimension d induces fewer than N/B + O(N/M^(1/d)) cache misses, where B and M are the block size and the cache size, respectively. Experiments show that our layout computation is faster and significantly less memory-consuming than the best known CO algorithm. Performance is comparable to this algorithm for classical visualization algorithm access patterns, or better when the BSP tree produced while computing the layout is used as an acceleration data structure adjusted to the layout. We also show that cache-oblivious approaches lead to significant performance increases on recent GPU architectures.
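
    As an illustrative aside (plain recursive coordinate bisection, not the authors' algorithm), the sketch below reorders mesh vertices so that geometrically nearby points end up nearby in memory, which is the basic property a cache-oblivious layout exploits at every level of the memory hierarchy.

        import numpy as np

        def co_layout(points, index=None, axis=0):
            """Recursive coordinate bisection: returns a memory ordering of the vertices."""
            if index is None:
                index = np.arange(len(points))
            if len(index) <= 2:
                return list(index)
            order = index[np.argsort(points[index, axis])]   # sort the current block along one axis
            mid = len(order) // 2
            nxt = (axis + 1) % points.shape[1]               # cycle through the dimensions
            return co_layout(points, order[:mid], nxt) + co_layout(points, order[mid:], nxt)

        pts = np.random.rand(1000, 3)                        # vertices of a point-sampled 3D mesh
        layout = co_layout(pts)                              # cache-friendly storage order
        reordered = pts[layout]
        print(reordered.shape)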

  14. The Influences of Emotion on Learning and Memory

    PubMed Central

    Tyng, Chai M.; Amin, Hafeez U.; Saad, Mohamad N. M.; Malik, Aamir S.

    2017-01-01

    Emotion has a substantial influence on the cognitive processes in humans, including perception, attention, learning, memory, reasoning, and problem solving. Emotion has a particularly strong influence on attention, especially modulating the selectivity of attention as well as motivating action and behavior. This attentional and executive control is intimately linked to learning processes, as intrinsically limited attentional capacities are better focused on relevant information. Emotion also facilitates encoding and helps retrieval of information efficiently. However, the effects of emotion on learning and memory are not always univalent, as studies have reported that emotion either enhances or impairs learning and long-term memory (LTM) retention, depending on a range of factors. Recent neuroimaging findings have indicated that the amygdala and prefrontal cortex cooperate with the medial temporal lobe in an integrated manner that affords (i) the amygdala modulating memory consolidation; (ii) the prefrontal cortex mediating memory encoding and formation; and (iii) the hippocampus for successful learning and LTM retention. We also review the nested hierarchies of circular emotional control and cognitive regulation (bottom-up and top-down influences) within the brain to achieve optimal integration of emotional and cognitive processing. This review highlights a basic evolutionary approach to emotion to understand the effects of emotion on learning and memory and the functional roles played by various brain regions and their mutual interactions in relation to emotional processing. We also summarize the current state of knowledge on the impact of emotion on memory and map implications for educational settings. In addition to elucidating the memory-enhancing effects of emotion, neuroimaging findings extend our understanding of emotional influences on learning and memory processes; this knowledge may be useful for the design of effective educational curricula to provide a conducive learning environment for both traditional “live” learning in classrooms and “virtual” learning through online-based educational technologies. PMID:28883804

  15. Targeting multiple heterogeneous hardware platforms with OpenCL

    NASA Astrophysics Data System (ADS)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.

  16. Artificial Intelligence Methods in Pursuit Evasion Differential Games

    DTIC Science & Technology

    1990-07-30

    …objectives, sometimes with fuzzy ones. Classical optimization, control, or game-theoretic methods are insufficient for their resolution. … (Figure 5.13: Example AHP hierarchy for choosing the most appropriate differential game and parametrization.) …the Analytic Hierarchy Process originated by T.L. Saaty of the Wharton School. The Analytic Hierarchy Process (AHP) is a general theory of…

  17. Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis.

    PubMed

    Zeng, Guangming; Jiang, Ru; Huang, Guohe; Xu, Min; Li, Jianbing

    2007-01-01

    This paper describes an innovative systematic approach, namely hierarchy grey relational analysis for optimal selection of wastewater treatment alternatives, based on the application of analytic hierarchy process (AHP) and grey relational analysis (GRA). It can be applied for complicated multicriteria decision-making to obtain scientific and reasonable results. The effectiveness of this approach was verified through a real case study. Four wastewater treatment alternatives (A(2)/O, triple oxidation ditch, anaerobic single oxidation ditch and SBR) were evaluated and compared against multiple economic, technical and administrative performance criteria, including capital cost, operation and maintenance (O and M) cost, land area, removal of nitrogenous and phosphorous pollutants, sludge disposal effect, stability of plant operation, maturity of technology and professional skills required for O and M. The result illustrated that the anaerobic single oxidation ditch was the optimal scheme and would obtain the maximum general benefits for the wastewater treatment plant to be constructed.
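
    A hedged sketch of the two ingredients combined above, with illustrative numbers rather than the case-study data: AHP criterion weights taken from the principal eigenvector of a pairwise-comparison matrix, and grey relational grades of each alternative computed against an ideal reference.

        import numpy as np

        # AHP: pairwise comparison of 3 criteria (e.g., cost, nutrient removal, O&M skill).
        C = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        eigvals, eigvecs = np.linalg.eig(C)
        w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
        w = w / w.sum()                               # criterion weights (principal eigenvector)

        # GRA: alternatives scored per criterion, normalized so that larger is better.
        X = np.array([[0.7, 0.9, 0.6],                # alternative 1
                      [0.8, 0.6, 0.9],                # alternative 2
                      [0.9, 0.8, 0.7]])               # alternative 3
        delta = np.abs(1.0 - X)                       # deviation from the ideal reference (all ones)
        rho = 0.5                                     # distinguishing coefficient
        xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        grades = xi @ w                               # weighted grey relational grade
        print(w, grades, "best alternative:", int(np.argmax(grades)) + 1)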

  18. Stability of glassy hierarchical networks

    NASA Astrophysics Data System (ADS)

    Zamani, M.; Camargo-Forero, L.; Vicsek, T.

    2018-02-01

    The structure of interactions in most animal and human societies can be best represented by complex hierarchical networks. In order to maintain close-to-optimal function both stability and adaptability are necessary. Here we investigate the stability of hierarchical networks that emerge from the simulations of an organization type with an efficiency function reminiscent of the Hamiltonian of spin glasses. Using this quantitative approach we find a number of expected (from everyday observations) and highly non-trivial results for the obtained locally optimal networks, including, for example: (i) stability increases with growing efficiency and level of hierarchy; (ii) the same perturbation results in a larger change for more efficient states; (iii) networks with a lower level of hierarchy become more efficient after perturbation; (iv) due to the huge number of possible optimal states only a small fraction of them exhibit resilience and, finally, (v) ‘attacks’ targeting the nodes selectively (regarding their position in the hierarchy) can result in paradoxical outcomes.

  19. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
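
    The following is a minimal, illustrative (mu+lambda)-style evolution strategy over a black-box objective, in the spirit of the tuning procedure described above; the toy throughput function stands in for an actual simulation run, and all parameter names are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def throughput(params):
            # Stand-in black-box fitness; the real one would run a simulation and measure throughput.
            return -np.sum((params - np.array([3.0, 0.5, 8.0])) ** 2)

        mu, lam, dim, sigma = 4, 16, 3, 0.5
        pop = rng.uniform(0.0, 10.0, size=(mu, dim))         # initial parameter sets

        for generation in range(50):
            # Variation: each offspring mutates a randomly chosen parent (point-wise evaluations only).
            parents = pop[rng.integers(0, mu, size=lam)]
            offspring = parents + sigma * rng.standard_normal((lam, dim))
            # Selection: keep the mu fittest out of parents and offspring.
            combined = np.vstack([pop, offspring])
            fitness = np.array([throughput(p) for p in combined])
            pop = combined[np.argsort(fitness)[-mu:]]

        print("best parameters:", pop[-1])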

  20. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  1. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  2. Operating systems. [of computers

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. A software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, the access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.

  3. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on- board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing i/o. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor and fpga implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the fpga firmware for the flight system. This approach is based on previous artificial intelligence work including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high level automated planning and low level execution and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications including automating low level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendez-vous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering and disaster early warning. Goal decomposition hierarchies can support high level fault tolerance. 
Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals taking into account existing faults and in real-time reallocating resources as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and interspacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation) and payload instruments. A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As faults occur and are detected the resource allocation is modified to avoid using the faulty resource. Goal decomposition hierarchies can implement variable autonomy (in which the operator chooses to command the system at a high or low level, mixed initiative planning (in which the system is able to interact with the operator, e.g, to request operator intervention when a working envelope is exceeded) and distributed control (in which, for example, multiple spacecraft cooperate to accomplish a task without a fixed master). The full paper will describe in greater detail how goal decompositions work, how they can be implemented, techniques for implementing a candidate application and the current state of the fpga implementation.
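
    A hedged sketch of the data structure implied by this description: goals carry an activation condition and an ordered list of decompositions, each with a gating condition and subgoals, and the first decomposition whose gate holds is executed. The condition and action callables below are hypothetical.

        from dataclasses import dataclass, field
        from typing import Callable, List, Union

        @dataclass
        class Decomposition:
            gate: Callable[[dict], bool]                            # gating condition over system/environment state
            subgoals: List[Union["Goal", Callable[[dict], bool]]]   # subgoals or directly executable actions

        @dataclass
        class Goal:
            name: str
            activation: Callable[[dict], bool]                      # global condition for pursuing this goal
            decompositions: List[Decomposition] = field(default_factory=list)

            def execute(self, state: dict) -> bool:
                if not self.activation(state):
                    return False
                for d in self.decompositions:                       # first decomposition whose gate holds is tried
                    if d.gate(state):
                        return all(s.execute(state) if isinstance(s, Goal) else s(state)
                                   for s in d.subgoals)
                return False

        # Hypothetical example: warm up an instrument before taking a measurement.
        warm_up = Goal("warm_up", lambda s: True,
                       [Decomposition(lambda s: s["power"] > 10,
                                      [lambda s: s.update(temp=s["temp"] + 20) or True])])
        measure = Goal("measure", lambda s: True,
                       [Decomposition(lambda s: s["temp"] >= 20, [lambda s: True]),
                        Decomposition(lambda s: True, [warm_up, lambda s: True])])

        state = {"power": 50, "temp": 0}
        print(measure.execute(state), state)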

  4. Baseline Optimization for the Measurement of CP Violation, Mass Hierarchy, and θ23 Octant in a Long-Baseline Neutrino Oscillation Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, M.; Bishai, M.; Cherdack, D.

    2015-03-19

    Next-generation long-baseline electron neutrino appearance experiments will seek to discover CP violation, determine the mass hierarchy and resolve the θ23 octant. In light of the recent precision measurements of θ13, we consider the sensitivity of these measurements in a study to determine the optimal baseline, including practical considerations regarding beam and detector performance. We conclude that a detector at a baseline of at least 1000 km in a wide-band muon neutrino beam is the optimal configuration.

  5. Exploitation of Self Organization in UAV Swarms for Optimization in Combat Environments

    DTIC Science & Technology

    2008-03-01

    …behaviors and entangled hierarchy into the Swarmfare [59] UAV simulation environment to include these models. • Validate this new model's success through… (Figure 4.3: The hierarchy of control emerges from the entangled hierarchy of the state relations at the simulation, swarm, and rule/behavior levels.) …

  6. Simulation-Based Evaluation of Learning Sequences for Instructional Technologies

    ERIC Educational Resources Information Center

    McEneaney, John E.

    2016-01-01

    Instructional technologies critically depend on systematic design, and learning hierarchies are a commonly advocated tool for designing instructional sequences. But hierarchies routinely allow numerous sequences and choosing an optimal sequence remains an unsolved problem. This study explores a simulation-based approach to modeling learning…

  7. Application of phase-change materials in memory taxonomy.

    PubMed

    Wang, Lei; Tu, Liang; Wen, Jing

    2017-01-01

    Phase-change materials are suitable for data storage because they exhibit reversible transitions between crystalline and amorphous states that have distinguishable electrical and optical properties. Consequently, these materials find applications in diverse memory devices ranging from conventional optical discs to emerging nanophotonic devices. Current research efforts are mostly devoted to phase-change random access memory, whereas the applications of phase-change materials in other types of memory devices are rarely reported. Here we review the physical principles of phase-change materials and devices aiming to help researchers understand the concept of phase-change memory. We classify phase-change memory devices into phase-change optical disc, phase-change scanning probe memory, phase-change random access memory, and phase-change nanophotonic device, according to their locations in memory hierarchy. For each device type we discuss the physical principles in conjunction with merits and weakness for data storage applications. We also outline state-of-the-art technologies and future prospects.

  8. Energy-efficient hierarchical processing in the network of wireless intelligent sensors (WISE)

    NASA Astrophysics Data System (ADS)

    Raskovic, Dejan

    Sensor network nodes have benefited from technological advances in the field of wireless communication, processing, and power sources. However, the processing power of microcontrollers is often not sufficient to perform sophisticated processing, while the power requirements of digital signal processing boards or handheld computers are usually too demanding for prolonged system use. We are matching the intrinsic hierarchical nature of many digital signal-processing applications with the natural hierarchy in distributed wireless networks, and building the hierarchical system of wireless intelligent sensors. Our goal is to build a system that will exploit the hierarchical organization to optimize the power consumption and extend battery life for the given time and memory constraints, while providing real-time processing of sensor signals. In addition, we are designing our system to be able to adapt to the current state of the environment, by dynamically changing the algorithm through procedure replacement. This dissertation presents the analysis of hierarchical environment and methods for energy profiling used to evaluate different system design strategies, and to optimize time-effective and energy-efficient processing.

  9. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine/robot degrees of freedom, the desired movements and action can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with arm/hand like structure or/and a robot with some or all of the above capabilities. We describe the approach, system and present preliminary results on a real-robotic platform.
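
    A minimal sketch (illustrative frames and numbers, not the authors' system) of converting a sensed location between two levels of such a spatial hierarchy, here a head-centered and a body-centered frame, using a homogeneous rigid transform.

        import numpy as np

        def rigid_transform(rotation_deg, translation):
            """4x4 homogeneous transform: rotation about the z axis plus a translation."""
            a = np.radians(rotation_deg)
            T = np.eye(4)
            T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
            T[:3, 3] = translation
            return T

        # Pose of the head frame expressed in the body frame (pan of 30 degrees, mounted 0.4 m up).
        body_T_head = rigid_transform(30.0, [0.0, 0.0, 0.4])

        # A sound source localized 1 m straight ahead in head-centered coordinates.
        p_head = np.array([1.0, 0.0, 0.0, 1.0])

        # Convert up the hierarchy to body-centered coordinates (and back with the inverse).
        p_body = body_T_head @ p_head
        p_head_again = np.linalg.inv(body_T_head) @ p_body
        print(p_body[:3], np.allclose(p_head, p_head_again))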

  10. A Study of the Effects of Variation of Short-Term Memory Load, Reading Response Length, and Processing Hierarchy on TOEFL Listening Comprehension Item Performance. Report 33.

    ERIC Educational Resources Information Center

    Henning, Grant

    Criticisms of the Test of English as a Foreign Language (TOEFL) have included speculation that the listening test places too much burden on short-term memory as compared with comprehension, that a knowledge of reading is required to respond successfully, and that many items appear to require mere recall and matching rather than higher-order…

  11. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations at all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). Then we perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.

  12. Evaluating, Comparing, and Interpreting Protein Domain Hierarchies

    PubMed Central

    2014-01-01

    Abstract Arranging protein domain sequences hierarchically into evolutionarily divergent subgroups is important for investigating evolutionary history, for speeding up web-based similarity searches, for identifying sequence determinants of protein function, and for genome annotation. However, whether or not a particular hierarchy is optimal is often unclear, and independently constructed hierarchies for the same domain can often differ significantly. This article describes methods for statistically evaluating specific aspects of a hierarchy, for probing the criteria underlying its construction and for direct comparisons between hierarchies. Information theoretical notions are used to quantify the contributions of specific hierarchical features to the underlying statistical model. Such features include subhierarchies, sequence subgroups, individual sequences, and subgroup-associated signature patterns. Underlying properties are graphically displayed in plots of each specific feature's contributions, in heat maps of pattern residue conservation, in “contrast alignments,” and through cross-mapping of subgroups between hierarchies. Together, these approaches provide a deeper understanding of protein domain functional divergence, reveal uncertainties caused by inconsistent patterns of sequence conservation, and help resolve conflicts between competing hierarchies. PMID:24559108

  13. A Cross-Modal Perspective on the Relationships between Imagery and Working Memory

    PubMed Central

    Likova, Lora T.

    2013-01-01

    Mapping the distinctions and interrelationships between imagery and working memory (WM) remains challenging. Although each of these major cognitive constructs is defined and treated in various ways across studies, most accept that both imagery and WM involve a form of internal representation available to our awareness. In WM, there is a further emphasis on goal-oriented, active maintenance, and use of this conscious representation to guide voluntary action. Multicomponent WM models incorporate representational buffers, such as the visuo-spatial sketchpad, plus central executive functions. If there is a visuo-spatial “sketchpad” for WM, does imagery involve the same representational buffer? Alternatively, does WM employ an imagery-specific representational mechanism to occupy our awareness? Or do both constructs utilize a more generic “projection screen” of an amodal nature? To address these issues, in a cross-modal fMRI study, I introduce a novel Drawing-Based Memory Paradigm, and conceptualize drawing as a complex behavior that is readily adaptable from the visual to non-visual modalities (such as the tactile modality), which opens intriguing possibilities for investigating cross-modal learning and plasticity. Blindfolded participants were trained through our Cognitive-Kinesthetic Method (Likova, 2010a, 2012) to draw complex objects guided purely by the memory of felt tactile images. If this WM task had been mediated by transfer of the felt spatial configuration to the visual imagery mechanism, the response-profile in visual cortex would be predicted to have the “top-down” signature of propagation of the imagery signal downward through the visual hierarchy. Remarkably, the pattern of cross-modal occipital activation generated by the non-visual memory drawing was essentially the inverse of this typical imagery signature. The sole visual hierarchy activation was isolated to the primary visual area (V1), and accompanied by deactivation of the entire extrastriate cortex, thus ’cutting-off’ any signal propagation from/to V1 through the visual hierarchy. The implications of these findings for the debate on the interrelationships between the core cognitive constructs of WM and imagery and the nature of internal representations are evaluated. PMID:23346061

  14. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
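
    A hedged sketch of the vectorized core of k-means that such implementations parallelize across threads and SIMD lanes; NumPy broadcasting stands in for the hand-tuned, high-bandwidth-memory-aware kernels on processors such as Knights Landing.

        import numpy as np

        def kmeans(X, k, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            for _ in range(iters):
                # ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2, computed for all points at once (SIMD-friendly).
                d2 = (np.sum(X**2, axis=1)[:, None]
                      - 2.0 * X @ centers.T
                      + np.sum(centers**2, axis=1)[None, :])
                labels = np.argmin(d2, axis=1)
                # Centroid update; empty clusters keep their previous center.
                for j in range(k):
                    members = X[labels == j]
                    if len(members):
                        centers[j] = members.mean(axis=0)
            return centers, labels

        X = np.random.rand(10000, 8).astype(np.float32)   # e.g., per-pixel phenology features
        centers, labels = kmeans(X, k=5)
        print(centers.shape, np.bincount(labels))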

  15. Multiprocessor switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    2014-03-11

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores that provides one highly reliable thread for high reliability connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to the selective pairing facility via a switch or a bus.

  16. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  17. Optimization-based interactive segmentation interface for multiregion problems

    PubMed Central

    Baxter, John S. H.; Rajchl, Martin; Peters, Terry M.; Chen, Elvis C. S.

    2016-01-01

    Abstract. Interactive segmentation is becoming of increasing interest to the medical imaging community in that it combines the positive aspects of both manual and automated segmentation. However, general-purpose tools have been lacking in terms of segmenting multiple regions simultaneously with a high degree of coupling between groups of labels. Hierarchical max-flow segmentation has taken advantage of this coupling for individual applications, but until recently, these algorithms were constrained to a particular hierarchy and could not be considered general-purpose. In a generalized form, the hierarchy for any given segmentation problem is specified in run-time, allowing different hierarchies to be quickly explored. We present an interactive segmentation interface, which uses generalized hierarchical max-flow for optimization-based multiregion segmentation guided by user-defined seeds. Applications in cardiac and neonatal brain segmentation are given as example applications of its generality. PMID:27335892

  18. Hierarchy in the eye of the beholder: (Anti-)egalitarianism shapes perceived levels of social inequality.

    PubMed

    Kteily, Nour S; Sheehy-Skeffington, Jennifer; Ho, Arnold K

    2017-01-01

    Debate surrounding the issue of inequality and hierarchy between social groups has become increasingly prominent in recent years. At the same time, individuals disagree about the extent to which inequality between advantaged and disadvantaged groups exists. Whereas prior work has examined the ways in which individuals legitimize (or delegitimize) inequality as a function of their motivations, we consider whether individuals' orientation toward group-based hierarchy motivates the extent to which they perceive inequality between social groups in the first place. Across 8 studies in both real-world (race, gender, and class) and artificial contexts, and involving members of both advantaged and disadvantaged groups, we show that the more individuals endorse hierarchy between groups, the less they perceive inequality between groups at the top and groups at the bottom. Perceiving less inequality is associated with rejecting egalitarian social policies aimed at reducing it. We show that these differences in hierarchy perception as a function of individuals' motivational orientation hold even when inequality is depicted abstractly using images, and even when individuals are financially incentivized to accurately report their true perceptions. Using a novel methodology to assess accurate memory of hierarchy, we find that differences may be driven by both antiegalitarians underestimating inequality, and egalitarians overestimating it. In sum, our results identify a novel perceptual bias rooted in individuals' chronic motivations toward hierarchy-maintenance, with the potential to influence their policy attitudes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Optimizing an Immersion ESL Curriculum Using Analytic Hierarchy Process

    ERIC Educational Resources Information Center

    Tang, Hui-Wen Vivian

    2011-01-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative…

  20. Self-organizing hierarchies in sensor and communication networks.

    PubMed

    Prokopenko, Mikhail; Wang, Peter; Valencia, Philip; Price, Don; Foreman, Mark; Farmer, Anthony

    2005-01-01

    We consider a hierarchical multicellular sensing and communication network, embedded in an ageless aerospace vehicle that is expected to detect and react to multiple impacts and damage over a wide range of impact energies. In particular, we investigate self-organization of impact boundaries enclosing critically damaged areas, and impact networks connecting remote cells that have detected noncritical impacts. Each level of the hierarchy is shown to have distinct higher-order emergent properties, desirable in self-monitoring and self-repairing vehicles. In addition, cells and communication messages are shown to need memory (hysteresis) in order to retain desirable emergent behavior within and between various hierarchical levels. Spatiotemporal robustness of self-organizing hierarchies is quantitatively measured with graph-theoretic and information-theoretic techniques, such as the Shannon entropy. This allows us to clearly identify phase transitions separating chaotic dynamics from ordered and robust patterns.

  1. Gregarious Data Re-structuring in a Many Core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Manzano Franco, Joseph B.; Marquez, Andres

    In this paper, we have developed a new methodology that takes into consideration the access patterns of a single parallel actor (e.g. a thread), as well as the access patterns of “grouped” parallel actors that share a resource (e.g. a distributed Level 3 cache). We start with a hierarchical tile code for our target machine and apply a series of transformations at the tile level to improve data residence at a given memory hierarchy level. The contributions of this paper include (a) collaborative data restructuring for group reuse and (b) a low-overhead transformation technique to improve access patterns and bring closely connected data elements together. Preliminary results on a many-core architecture, the Tilera TileGX, show promising improvements over optimized OpenMP code (up to a 31% increase in GFLOPS) and over our own previous work on fine-grained runtimes (up to 16%) for selected kernels.
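
    A minimal sketch of the tile-level idea described above: processing a large array one tile at a time keeps the working set resident in a chosen level of the memory hierarchy, and the tile size per level is the natural tuning knob. The transpose kernel and tile size below are illustrative, not the authors' code.

```python
import numpy as np

def blocked_transpose(a, tile=64):
    """Copy-transpose a 2D array tile by tile.

    Handling one tile x tile block at a time keeps both the source and
    destination blocks resident in a single cache level, so the strided
    accesses of the transpose hit cache instead of DRAM. In a hierarchical
    tiling, the tile size would be chosen per level (L2, shared L3, ...).
    """
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            out[j:j + tile, i:i + tile] = a[i:i + tile, j:j + tile].T
    return out
```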

  2. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE PAGES

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to a 7x I/O performance improvement for scientific data.
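
    The following is a hypothetical sketch, not the authors' interface, of how an object-based abstraction might carry enough metadata to decide which storage tier each data object lives in; the tier names and the greedy policy are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical tier names, ordered fastest to slowest.
TIERS = ["HBM", "DRAM", "burst_buffer", "parallel_fs"]

@dataclass
class DataObject:
    name: str
    size_bytes: int
    hot: bool = False                      # frequently accessed object
    placements: list = field(default_factory=list)

def place(obj, tier_capacity):
    """Greedy placement: hot objects try the fastest tier with room,
    cold objects start from the slowest tier."""
    order = TIERS if obj.hot else list(reversed(TIERS))
    for tier in order:
        if tier_capacity.get(tier, 0) >= obj.size_bytes:
            tier_capacity[tier] -= obj.size_bytes
            obj.placements.append(tier)
            return tier
    raise MemoryError(f"no tier can hold {obj.name}")

capacity = {"HBM": 16 << 30, "DRAM": 256 << 30, "parallel_fs": 1 << 50}
place(DataObject("particles", 8 << 30, hot=True), capacity)   # -> "HBM"
```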

  3. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to a 7x I/O performance improvement for scientific data.

  4. The Science of Computing: Virtual Memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1986-01-01

    In the March-April issue, I described how a computer's storage system is organized as a hierarchy consisting of cache, main memory, and secondary memory (e.g., disk). The cache and main memory form a subsystem that functions like main memory but attains speeds approaching cache. What happens if a program and its data are too large for the main memory? This is not a frivolous question. Every generation of computer users has been frustrated by insufficient memory. A new line of computers may have sufficient storage for the computations of its predecessor, but new programs will soon exhaust its capacity. In 1960, a long-range planning committee at MIT dared to dream of a computer with 1 million words of main memory. In 1985, the Cray-2 was delivered with 256 million words. Computational physicists dream of computers with 1 billion words. Computer architects have done an outstanding job of enlarging main memories yet they have never kept up with demand. Only the shortsighted believe they can.

  5. Application of phase-change materials in memory taxonomy

    PubMed Central

    Wang, Lei; Tu, Liang; Wen, Jing

    2017-01-01

    Abstract Phase-change materials are suitable for data storage because they exhibit reversible transitions between crystalline and amorphous states that have distinguishable electrical and optical properties. Consequently, these materials find applications in diverse memory devices ranging from conventional optical discs to emerging nanophotonic devices. Current research efforts are mostly devoted to phase-change random access memory, whereas the applications of phase-change materials in other types of memory devices are rarely reported. Here we review the physical principles of phase-change materials and devices aiming to help researchers understand the concept of phase-change memory. We classify phase-change memory devices into phase-change optical disc, phase-change scanning probe memory, phase-change random access memory, and phase-change nanophotonic device, according to their locations in memory hierarchy. For each device type we discuss the physical principles in conjunction with merits and weakness for data storage applications. We also outline state-of-the-art technologies and future prospects. PMID:28740557

  6. Entangled states in the role of witnesses

    NASA Astrophysics Data System (ADS)

    Wang, Bang-Hai

    2018-05-01

    Quantum entanglement lies at the heart of quantum mechanics and quantum information processing. In this work, we show a framework where entangled states play the role of witnesses. We extend the notion of entanglement witnesses, developing a hierarchy of witnesses for classes of observables. This hierarchy captures the fact that entangled states act as witnesses for detecting entanglement witnesses and separable states act as witnesses for the set of non-block-positive Hermitian operators. Indeed, more hierarchies of witnesses exist. We introduce the concept of finer and optimal entangled states. These definitions not only give an unambiguous and non-numeric quantification of entanglement and an alternative perspective on edge states but also answer the open question of what the remainder of the best separable approximation of a density matrix is. Furthermore, we classify all entangled states into disjoint families with optimal entangled states at its heart. This implies that we can focus only on the study of a typical family with optimal entangled states at its core when we investigate entangled states. Our framework also assembles many seemingly different findings with simple arguments that do not require lengthy calculations.

  7. A Framework for Distributed Problem Solving

    NASA Astrophysics Data System (ADS)

    Leone, Joseph; Shin, Don G.

    1989-03-01

    This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms, named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process which primarily relies on the ability of making associations between vast amounts of related concepts, sorting out the combined results, and promoting the most plausible ones. The amplification process is discussed in detail. An implementation of the amplification process is presented. The process is illustrated by an example.

  8. Optimizing an immersion ESL curriculum using analytic hierarchy process.

    PubMed

    Tang, Hui-Wen Vivian

    2011-11-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative importance of course criteria for the purpose of tailoring an optimal one-week immersion English as a second language (ESL) curriculum for elementary school students in a suburban county of Taiwan. The hierarchy model and AHP analysis utilized in the present study will be useful for resolving several important multi-criteria decision-making issues in planning and evaluating ESL programs. This study also offers valuable insights and provides a basis for further research in customizing ESL curriculum models for different student populations with distinct learning needs, goals, and socioeconomic backgrounds. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. The neutrino mass hierarchy measurement with a neutrino telescope in the Mediterranean Sea: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsirigotis, A. G.; Collaboration: KM3NeT Collaboration

    With the measurement of a nonzero value of the θ13 neutrino mixing parameter, interest in neutrinos as a source of the baryon asymmetry of the universe has increased. Among the measurements of a rich and varied program in near-future neutrino physics is the determination of the mass hierarchy. We present the status of a study of the feasibility of using a densely instrumented undersea neutrino detector to determine the mass hierarchy, utilizing the Mikheyev-Smirnov-Wolfenstein (MSW) effect on atmospheric neutrino oscillations. The detector will use technology developed for KM3NeT. We present the systematic studies of the optimization of a detector in the required 5–10 GeV energy regime. These studies include new tracking and interaction identification algorithms as well as geometrical optimizations of the detector.

  10. Experimental evaluation of multiprocessor cache-based error recovery

    NASA Technical Reports Server (NTRS)

    Janssens, Bob; Fuchs, W. K.

    1991-01-01

    Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have been recently developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, different in the manner in which they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effects of integrating the recovery schemes into the cache coherence protocol are evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead but uncontrollably high variability in the checkpoint interval.
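
    A toy model of the mechanism described above, assuming a simplified write-back cache: the dirty lines written since the last checkpoint are the only state not yet reflected in memory, so forcing a write-back of all of them establishes a consistent recovery point, and replacing a dirty line between checkpoints must be avoided. This is an illustration of the idea, not any of the three evaluated schemes.

```python
class CheckpointingCache:
    """Toy cache-based checkpointing model."""

    def __init__(self, memory, max_dirty=4):
        self.memory = memory      # dict addr -> value at last checkpoint
        self.dirty = {}           # addr -> value written since checkpoint
        self.max_dirty = max_dirty
        self.checkpoints = 0

    def write(self, addr, value):
        # Replacing a dirty line would overwrite the recoverable state,
        # so take a checkpoint before the dirty set can overflow.
        if addr not in self.dirty and len(self.dirty) >= self.max_dirty:
            self.checkpoint()
        self.dirty[addr] = value

    def checkpoint(self):
        self.memory.update(self.dirty)   # write back all dirty lines
        self.dirty.clear()
        self.checkpoints += 1

    def rollback(self):
        self.dirty.clear()               # memory still holds last checkpoint
```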

  11. Simplified Interface to Complex Memory Hierarchies 1.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Michael; Ionkov, Latchesar; Williams, Sean

    2017-02-21

    Memory systems are expected to get evermore complicated in the coming years, and it isn't clear exactly what form that complexity will take. On the software side, a simple, flexible way of identifying and working with memory pools is needed. Additionally, most developers seek code portability and do not want to learn the intricacies of complex memory. Hence, we believe that a library for interacting with complex memory systems should expose two kinds of abstraction: First, a low-level, mechanism-based interface designed for the runtime or advanced user that wants complete control, with its focus on simplified representation but with all decisions left to the caller. Second, a high-level, policy-based interface designed for ease of use for the application developer, in which we aim for best-practice decisions based on application intent. We have developed such a library, called SICM: Simplified Interface to Complex Memory.
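
    The two abstraction levels can be pictured as below; this is a hypothetical illustration only, and none of these names are the actual SICM API.

```python
class MemoryPool:
    """Low-level, mechanism-style handle: the caller decides everything."""

    def __init__(self, kind, numa_node):
        self.kind = kind              # e.g. "HBM", "DDR", "NVM"
        self.numa_node = numa_node

    def alloc(self, nbytes):
        # A real mechanism layer would allocate from the specific device
        # or NUMA node; a plain buffer stands in for that here.
        return bytearray(nbytes)

def smart_alloc(nbytes, intent, pools):
    """High-level, policy-style call: the library picks a pool from the
    caller's declared intent ("bandwidth", "capacity", "persistent")."""
    preferred = {"bandwidth": "HBM", "capacity": "DDR", "persistent": "NVM"}
    kind = preferred.get(intent, "DDR")
    pool = next((p for p in pools if p.kind == kind), pools[0])
    return pool.alloc(nbytes)

pools = [MemoryPool("HBM", 0), MemoryPool("DDR", 0)]
buf = smart_alloc(1 << 20, "bandwidth", pools)   # policy layer picks HBM
```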

  12. A review of emerging non-volatile memory (NVM) technologies and applications

    NASA Astrophysics Data System (ADS)

    Chen, An

    2016-11-01

    This paper will review emerging non-volatile memory (NVM) technologies, with the focus on phase change memory (PCM), spin-transfer-torque random-access-memory (STTRAM), resistive random-access-memory (RRAM), and ferroelectric field-effect-transistor (FeFET) memory. These promising NVM devices are evaluated in terms of their advantages, challenges, and applications. Their performance is compared based on reported parameters of major industrial test chips. Memory selector devices and cell structures are discussed. Changing market trends toward low power (e.g., mobile, IoT) and data-centric applications create opportunities for emerging NVMs. High-performance and low-cost emerging NVMs may simplify memory hierarchy, introduce non-volatility in logic gates and circuits, reduce system power, and enable novel architectures. Storage-class memory (SCM) based on high-density NVMs could fill the performance and density gap between memory and storage. Some unique characteristics of emerging NVMs can be utilized for novel applications beyond the memory space, e.g., neuromorphic computing, hardware security, etc. In the beyond-CMOS era, emerging NVMs have the potential to fulfill more important functions and enable more efficient, intelligent, and secure computing systems.

  13. Two-level main memory co-design: Multi-threaded algorithmic primitives, analysis, and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Michael A.; Berry, Jonathan W.; Hammond, Simon D.

    A challenge in computer architecture is that processors often cannot be fed data from DRAM as fast as CPUs can consume it. Therefore, many applications are memory-bandwidth bound. With this motivation and the realization that traditional architectures (with all DRAM reachable only via bus) are insufficient to feed groups of modern processing units, vendors have introduced a variety of non-DDR 3D memory technologies (Hybrid Memory Cube (HMC), Wide I/O 2, High Bandwidth Memory (HBM)). These offer higher bandwidth and lower power by stacking DRAM chips on the processor or nearby on a silicon interposer. We will call these solutions “near-memory,” and, if user-addressable, “scratchpad.” High-performance systems on the market now offer two levels of main memory: near-memory on package and traditional DRAM further away. In the near term we expect the latencies of near-memory and DRAM to be similar. Here, it is natural to think of near-memory as another module on the DRAM level of the memory hierarchy. Vendors are expected to offer modes in which the near memory is used as cache, but we believe that this will be inefficient.

  14. SEPAC flight software detailed design specifications, volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The detailed design specifications (as built) for the SEPAC Flight Software are defined. The design includes a description of the total software system and of each individual module within the system. The design specifications describe the decomposition of the software system into its major components. The system structure is expressed in the following forms: the control-flow hierarchy of the system, the data-flow structure of the system, the task hierarchy, the memory structure, and the software to hardware configuration mapping. The component design description includes details on the following elements: register conventions, module (subroutine) invocation, module functions, interrupt servicing, data definitions, and database structure.

  15. Who is the boss? Individual recognition memory and social hierarchy formation in crayfish.

    PubMed

    Jiménez-Morales, Nayeli; Mendoza-Ángeles, Karina; Porras-Villalobos, Mercedes; Ibarra-Coronado, Elizabeth; Roldán-Roldán, Gabriel; Hernández-Falcón, Jesús

    2018-01-01

    Under laboratory conditions, crayfish establish hierarchical orders through agonistic encounters whose outcome defines one dominant and one or more submissive animals. These agonistic encounters are ritualistic, based on threats, pushes, attacks, grabs, and avoidance behaviors that include retreats and escape responses. Agonistic behavior in a triad of unfamiliar, size-matched animals is intense on the first day of social interaction and the intensity fades on daily repetitions. The dominant animal keeps its status for long periods, and the submissive ones seem to remember 'who the boss is'. It has been assumed that animals remember and recognize their hierarchical status by urine signals, but the putative substance mediating this recognition has not been reported. The aim of this work was to characterize this hierarchical recognition memory. Triads of unfamiliar crayfish (male animals, size and weight-matched) were faced against one another in standardized agonistic protocols for five consecutive days to analyze memory acquisition dynamics (Experiment 1). In Experiment 2, dominant crayfish were shifted among triads to disclose whether hierarchy depended upon individual recognition memory or recognition of status. The maintenance of the hierarchical structure without behavioral reinforcement was assessed by immobilizing the dominant animal during eleven daily agonistic encounters, and considering any shift in the dominance order (Experiment 3). Standard amnesic treatments (anisomycin, scopolamine or cold-anesthesia) were given to all members of the triads immediately after the first interaction session to prevent individual recognition memory consolidation and evaluate its effect on the hierarchical order (Experiment 4). Acquisition of hierarchical recognition occurs at the first agonistic encounter and agonistic behavior gradually diminishes in the following days; animals keep their hierarchical order despite the inability of the dominant crayfish to attack the submissive ones. Finally, blocking of protein synthesis or muscarinic receptors and cold anesthesia impair memory consolidation. These findings suggest that agonistic encounters induce the acquisition of a robust and lasting social recognition memory in crayfish. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    NASA Astrophysics Data System (ADS)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from traditional single-plant to multi-site supply chain where multiple plants are serving customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer satisfaction demand level is developed. The proposed solution approach yields to a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
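
    A small worked sketch of the AHP step the abstract describes: priorities for the objectives are obtained from a pairwise-comparison matrix (via its principal eigenvector), checked for consistency, and then used to score the Pareto-optimal solutions. The matrix entries and solution scores below are illustrative only.

```python
import numpy as np

# Pairwise comparisons (Saaty scale) for three objectives:
# cost, product quality, customer satisfaction.  A[i, j] says how much
# more important objective i is judged to be than objective j.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                       # priority weights of the objectives

lam_max = np.real(eigvals).max()
ci = (lam_max - 3) / (3 - 1)
cr = ci / 0.58                        # random index for n = 3; want CR < 0.1

# Normalized objective scores of two Pareto-optimal candidate plans.
solutions = np.array([[0.9, 0.6, 0.7],
                      [0.7, 0.9, 0.8]])
best = int(np.argmax(solutions @ w))  # plan preferred under these weights
```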

  17. Feature-Based Visual Short-Term Memory Is Widely Distributed and Hierarchically Organized.

    PubMed

    Dotson, Nicholas M; Hoffman, Steven J; Goodell, Baldwin; Gray, Charles M

    2018-06-15

    Feature-based visual short-term memory is known to engage both sensory and association cortices. However, the extent of the participating circuit and the neural mechanisms underlying memory maintenance are still a matter of vigorous debate. To address these questions, we recorded neuronal activity from 42 cortical areas in monkeys performing a feature-based visual short-term memory task and an interleaved fixation task. We find that task-dependent differences in firing rates are widely distributed throughout the cortex, while stimulus-specific changes in firing rates are more restricted and hierarchically organized. We also show that microsaccades during the memory delay encode the stimuli held in memory and that units modulated by microsaccades are more likely to exhibit stimulus specificity, suggesting that eye movements contribute to visual short-term memory processes. These results support a framework in which most cortical areas, within a modality, contribute to mnemonic representations at timescales that increase along the cortical hierarchy. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit by a latency tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency, etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault-tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.
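
    The cache-blocking transformation at the heart of such generators can be sketched as below; the block size is exactly the kind of parameter an automatic tuner searches over so that one tile of each operand fits in a chosen level of the memory hierarchy. This is a generic illustration, not the project's generated code.

```python
import numpy as np

def matmul_blocked(a, b, bs=64):
    """Cache-blocked matrix multiply C = A @ B.

    `bs` controls the tile footprint: three bs x bs tiles should fit in
    the targeted cache level, which is what an auto-tuner optimizes for.
    """
    n, k = a.shape
    _, m = b.shape
    c = np.zeros((n, m), dtype=a.dtype)
    for i in range(0, n, bs):
        for p in range(0, k, bs):
            for j in range(0, m, bs):
                c[i:i+bs, j:j+bs] += a[i:i+bs, p:p+bs] @ b[p:p+bs, j:j+bs]
    return c
```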

  19. Two-level main memory co-design: Multi-threaded algorithmic primitives, analysis, and simulation

    DOE PAGES

    Bender, Michael A.; Berry, Jonathan W.; Hammond, Simon D.; ...

    2017-01-03

    A challenge in computer architecture is that processors often cannot be fed data from DRAM as fast as CPUs can consume it. Therefore, many applications are memory-bandwidth bound. With this motivation and the realization that traditional architectures (with all DRAM reachable only via bus) are insufficient to feed groups of modern processing units, vendors have introduced a variety of non-DDR 3D memory technologies (Hybrid Memory Cube (HMC), Wide I/O 2, High Bandwidth Memory (HBM)). These offer higher bandwidth and lower power by stacking DRAM chips on the processor or nearby on a silicon interposer. We will call these solutions “near-memory,” and, if user-addressable, “scratchpad.” High-performance systems on the market now offer two levels of main memory: near-memory on package and traditional DRAM further away. In the near term we expect the latencies of near-memory and DRAM to be similar. Here, it is natural to think of near-memory as another module on the DRAM level of the memory hierarchy. Vendors are expected to offer modes in which the near memory is used as cache, but we believe that this will be inefficient.
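
    If the near memory is instead exposed as a user-addressable scratchpad, the placement decision the authors argue for can be sketched as a simple planning step: put the most bandwidth-critical arrays on package first and leave the rest in far DRAM. The metric and sizes below are illustrative assumptions.

```python
def plan_placement(arrays, scratchpad_bytes):
    """Greedily place the most bandwidth-hungry arrays in near-memory.

    arrays: list of (name, size_bytes, bandwidth_weight) tuples, where the
    weight is some estimate of bytes streamed per unit of work.
    Returns (names placed in near-memory, names left in far DRAM).
    """
    near, far = [], []
    remaining = scratchpad_bytes
    for name, size, _ in sorted(arrays, key=lambda t: -t[2]):
        if size <= remaining:
            near.append(name)
            remaining -= size
        else:
            far.append(name)
    return near, far

near, far = plan_placement(
    [("A", 8 << 30, 1.0), ("B", 8 << 30, 1.0), ("lookup", 2 << 30, 0.1)],
    scratchpad_bytes=16 << 30)          # A and B fit; lookup stays in DRAM
```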

  20. Automated hierarchical classification of protein domain subfamilies based on functionally-divergent residue signatures

    PubMed Central

    2012-01-01

    Background The NCBI Conserved Domain Database (CDD) consists of a collection of multiple sequence alignments of protein domains that are at various stages of being manually curated into evolutionary hierarchies based on conserved and divergent sequence and structural features. These domain models are annotated to provide insights into the relationships between sequence, structure and function via web-based BLAST searches. Results Here we automate the generation of conserved domain (CD) hierarchies using a combination of heuristic and Markov chain Monte Carlo (MCMC) sampling procedures and starting from a (typically very large) multiple sequence alignment. This procedure relies on statistical criteria to define each hierarchy based on the conserved and divergent sequence patterns associated with protein functional-specialization. At the same time this facilitates the sequence and structural annotation of residues that are functionally important. These statistical criteria also provide a means to objectively assess the quality of CD hierarchies, a non-trivial task considering that the protein subgroups are often very distantly related—a situation in which standard phylogenetic methods can be unreliable. Our aim here is to automatically generate (typically sub-optimal) hierarchies that, based on statistical criteria and visual comparisons, are comparable to manually curated hierarchies; this serves as the first step toward the ultimate goal of obtaining optimal hierarchical classifications. A plot of runtimes for the most time-intensive (non-parallelizable) part of the algorithm indicates a nearly linear time complexity so that, even for the extremely large Rossmann fold protein class, results were obtained in about a day. Conclusions This approach automates the rapid creation of protein domain hierarchies and thus will eliminate one of the most time consuming aspects of conserved domain database curation. At the same time, it also facilitates protein domain annotation by identifying those pattern residues that most distinguish each protein domain subgroup from other related subgroups. PMID:22726767

  1. An extended continuum model considering optimal velocity change with memory and numerical tests

    NASA Astrophysics Data System (ADS)

    Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng

    2018-01-01

    In this paper, an extended continuum model of traffic flow is proposed with the consideration of optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation considering the optimal velocity changes with memory are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study the extended continuum model, which explores how optimal velocity changes with memory affect velocity, density, and energy consumption. Numerical results show that when considering the effects of optimal velocity changes with memory, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow efficiently. Furthermore, the numerical results demonstrate that the effect of optimal velocity changes with memory can avoid the disadvantage of historical information, which increases the stability of traffic flow on the road, and thus improves traffic flow stability and minimizes cars' energy consumption.
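
    The abstract does not reproduce the governing equations; purely as orientation, continuum models in this family couple a conservation law with relaxation toward an optimal velocity, and "memory" can be introduced by also reacting to the optimal velocity evaluated a memory step in the past. The form below is a generic illustration under those assumptions, not the paper's exact model.

```latex
\begin{aligned}
&\partial_t \rho + \partial_x(\rho v) = 0,\\
&\partial_t v + v\,\partial_x v
  = a\bigl[V(\rho(x,t)) - v\bigr]
  + \gamma\,\frac{V(\rho(x,t)) - V(\rho(x,t-\tau_m))}{\tau_m},
\end{aligned}
```

    where $V(\rho)$ is the optimal velocity function, $a$ the driver sensitivity, $\tau_m$ the memory step, and $\gamma$ the sensitivity to the remembered change in optimal velocity.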

  2. Boy, Am I Tired!! Sleep....Why You Need It!

    ERIC Educational Resources Information Center

    Olivieri, Chrystyne

    2016-01-01

    Sleep is essential to a healthy human being. It is among the basic necessities of life, located at the bottom of Maslow's Hierarchy of Needs. It is a dynamic activity, necessary to maintain mood, memory and cognitive performance. Sleep disorders are strongly associated with the development of acute and chronic medical conditions. This article…

  3. Stream Processors

    NASA Astrophysics Data System (ADS)

    Erez, Mattan; Dally, William J.

    Stream processors, like other multi-core architectures, partition their functional units and storage into multiple processing elements. In contrast to typical architectures, which contain symmetric general-purpose cores and a cache hierarchy, stream processors have a significantly leaner design. Stream processors are specifically designed for the stream execution model, in which applications have large amounts of explicit parallel computation, structured and predictable control, and memory accesses that can be performed at a coarse granularity. Applications in the streaming model are expressed in a gather-compute-scatter form, yielding programs with explicit control over transferring data to and from on-chip memory. Relying on these characteristics, which are common to many media processing and scientific computing applications, stream architectures redefine the boundary between software and hardware responsibilities with software bearing much of the complexity required to manage concurrency, locality, and latency tolerance. Thus, stream processors have minimal control consisting of fetching medium- and coarse-grained instructions and executing them directly on the many ALUs. Moreover, the on-chip storage hierarchy of stream processors is under explicit software control, as is all communication, eliminating the need for complex reactive hardware mechanisms.
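
    The gather-compute-scatter form can be made concrete with a short sketch: the gather and scatter phases are the only places where off-chip data moves, while the compute phase touches only the locally staged block, mirroring software-managed on-chip storage. The kernel itself is an arbitrary illustration.

```python
import numpy as np

def stream_kernel(values, in_idx, out_idx, out):
    """One gather-compute-scatter pass in the stream style."""
    local = values[in_idx]               # gather: stage operands "on chip"
    local = np.sqrt(local) * 2.0 + 1.0   # compute: local, SIMD-friendly work
    out[out_idx] = local                 # scatter: explicit write-back
    return out

out = np.zeros(8)
stream_kernel(np.arange(16, dtype=float),
              in_idx=np.array([0, 3, 5, 7]),
              out_idx=np.array([1, 2, 4, 6]),
              out=out)
```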

  4. Hierarchy in directed random networks.

    PubMed

    Mones, Enys

    2013-02-01

    In recent years, the theory and application of complex networks have been quickly developing in a remarkable way due to the increasing amount of data from real systems and the fruitful application of powerful methods used in statistical physics. Many important characteristics of social or biological systems can be described by the study of their underlying structure of interactions. Hierarchy is one of these features that can be formulated in the language of networks. In this paper we present some (qualitative) analytic results on the hierarchical properties of random network models with zero correlations and also investigate, mainly numerically, the effects of different types of correlations. The behavior of the hierarchy is different in the absence and the presence of giant components. We show that the hierarchical structure can be drastically different if there are one-point correlations in the network. We also show numerical results suggesting that the hierarchy does not change monotonically with the correlations and there is an optimal level of nonzero correlations maximizing the level of hierarchy.
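
    One standard way to quantify the level of hierarchy in a directed network, introduced in related work by the same author and plausibly the measure used here, is the global reaching centrality: compare the fraction of nodes each node can reach with that of the most "commanding" node. A minimal sketch:

```python
import networkx as nx

def global_reaching_centrality(g):
    """GRC of a directed graph: near 0 for symmetric reachability,
    approaching 1 for a strict command hierarchy."""
    n = g.number_of_nodes()
    local = {v: len(nx.descendants(g, v)) / (n - 1) for v in g}
    c_max = max(local.values())
    return sum(c_max - c for c in local.values()) / (n - 1)

tree = nx.DiGraph([(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)])
cycle = nx.cycle_graph(7, create_using=nx.DiGraph)
print(global_reaching_centrality(tree))   # ~0.89, strongly hierarchical
print(global_reaching_centrality(cycle))  # 0.0, no hierarchy
```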

  5. Applications of polynomial optimization in financial risk investment

    NASA Astrophysics Data System (ADS)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Recently, polynomial optimization has found many important applications in optimization, financial economics, eigenvalues of tensors, etc. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve the specific cases. The results show that polynomial optimization is effective for some financial optimization problems.
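
    For orientation, the standard mean-variance model the abstract refers to can be written as the optimization problem below; with fixed or proportional transaction costs the problem stops being a plain convex quadratic program, which is where Lasserre-type SDP relaxations become useful. This is the textbook form, not necessarily the paper's exact variant.

```latex
\min_{x \in \mathbb{R}^n}\; x^{\top}\Sigma x \;-\; \lambda\,\mu^{\top}x
  \;+\; \sum_{i=1}^{n} c_i\,\lvert x_i - x_i^{0}\rvert
\qquad \text{s.t.}\qquad \mathbf{1}^{\top}x = 1,\quad x \ge 0,
```

    where $\Sigma$ is the return covariance matrix, $\mu$ the expected returns, $\lambda$ the risk-return trade-off, $x^{0}$ the current portfolio, and $c_i$ the transaction cost rates.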

  6. Atmospheric neutrino oscillation analysis with external constraints in Super-Kamiokande I-IV

    NASA Astrophysics Data System (ADS)

    Abe, K.; Bronner, C.; Haga, Y.; Hayato, Y.; Ikeda, M.; Iyogi, K.; Kameda, J.; Kato, Y.; Kishimoto, Y.; Marti, Ll.; Miura, M.; Moriyama, S.; Nakahata, M.; Nakajima, T.; Nakano, Y.; Nakayama, S.; Okajima, Y.; Orii, A.; Pronost, G.; Sekiya, H.; Shiozawa, M.; Sonoda, Y.; Takeda, A.; Takenaka, A.; Tanaka, H.; Tasaka, S.; Tomura, T.; Akutsu, R.; Irvine, T.; Kajita, T.; Kametani, I.; Kaneyuki, K.; Nishimura, Y.; Okumura, K.; Richard, E.; Tsui, K. M.; Labarga, L.; Fernandez, P.; Blaszczyk, F. d. M.; Gustafson, J.; Kachulis, C.; Kearns, E.; Raaf, J. L.; Stone, J. L.; Sulak, L. R.; Berkman, S.; Tobayama, S.; Goldhaber, M.; Carminati, G.; Elnimr, M.; Kropp, W. R.; Mine, S.; Locke, S.; Renshaw, A.; Smy, M. B.; Sobel, H. W.; Takhistov, V.; Weatherly, P.; Ganezer, K. S.; Hartfiel, B. L.; Hill, J.; Hong, N.; Kim, J. Y.; Lim, I. T.; Park, R. G.; Akiri, T.; Himmel, A.; Li, Z.; O'Sullivan, E.; Scholberg, K.; Walter, C. W.; Wongjirad, T.; Ishizuka, T.; Nakamura, T.; Jang, J. S.; Choi, K.; Learned, J. G.; Matsuno, S.; Smith, S. N.; Amey, J.; Litchfield, R. P.; Ma, W. Y.; Uchida, Y.; Wascko, M. O.; Cao, S.; Friend, M.; Hasegawa, T.; Ishida, T.; Ishii, T.; Kobayashi, T.; Nakadaira, T.; Nakamura, K.; Oyama, Y.; Sakashita, K.; Sekiguchi, T.; Tsukamoto, T.; Abe, KE.; Hasegawa, M.; Suzuki, A. T.; Takeuchi, Y.; Yano, T.; Hayashino, T.; Hirota, S.; Huang, K.; Ieki, K.; Jiang, M.; Kikawa, T.; Nakamura, KE.; Nakaya, T.; Patel, N. D.; Suzuki, K.; Takahashi, S.; Wendell, R. A.; Anthony, L. H. V.; McCauley, N.; Pritchard, A.; Fukuda, Y.; Itow, Y.; Mitsuka, G.; Murase, M.; Muto, F.; Suzuki, T.; Mijakowski, P.; Frankiewicz, K.; Hignight, J.; Imber, J.; Jung, C. K.; Li, X.; Palomino, J. L.; Santucci, G.; Vilela, C.; Wilking, M. J.; Yanagisawa, C.; Ito, S.; Fukuda, D.; Ishino, H.; Kayano, T.; Kibayashi, A.; Koshio, Y.; Mori, T.; Nagata, H.; Sakuda, M.; Xu, C.; Kuno, Y.; Wark, D.; Di Lodovico, F.; Richards, B.; Tacik, R.; Kim, S. B.; Cole, A.; Thompson, L.; Okazawa, H.; Choi, Y.; Ito, K.; Nishijima, K.; Koshiba, M.; Totsuka, Y.; Suda, Y.; Yokoyama, M.; Calland, R. G.; Hartz, M.; Martens, K.; Quilain, B.; Simpson, C.; Suzuki, Y.; Vagins, M. R.; Hamabe, D.; Kuze, M.; Yoshida, T.; Ishitsuka, M.; Martin, J. F.; Nantais, C. M.; de Perio, P.; Tanaka, H. A.; Konaka, A.; Chen, S.; Wan, L.; Zhang, Y.; Wilkes, R. J.; Minamino, A.; Super-Kamiokande Collaboration

    2018-04-01

    An analysis of atmospheric neutrino data from all four run periods of Super-Kamiokande optimized for sensitivity to the neutrino mass hierarchy is presented. Confidence intervals for Δm²₃₂, sin²θ₂₃, sin²θ₁₃ and δCP are presented for normal neutrino mass hierarchy and inverted neutrino mass hierarchy hypotheses, based on atmospheric neutrino data alone. Additional constraints from reactor data on θ₁₃ and from published binned T2K data on muon neutrino disappearance and electron neutrino appearance are added to the atmospheric neutrino fit to give enhanced constraints on the above parameters. Over the range of parameters allowed at 90% confidence level, the normal mass hierarchy is favored by between 91.9% and 94.5% based on the combined Super-Kamiokande plus T2K result.

  7. A linguistic geometry for space applications

    NASA Technical Reports Server (NTRS)

    Stilman, Boris

    1994-01-01

    We develop a formal theory, the so-called Linguistic Geometry, in order to discover the inner properties of human expert heuristics, which were successful in a certain class of complex control systems, and apply them to different systems. This research relies on the formalization of search heuristics of highly skilled human experts which allow for the decomposition of a complex system into a hierarchy of subsystems, and thus solve intractable problems by reducing the search. The hierarchy of subsystems is represented as a hierarchy of formal attribute languages. This paper includes a formal survey of the Linguistic Geometry, and a new example of the solution of an optimization problem for space robotic vehicles. This example includes actual generation of the hierarchy of languages, some details of trajectory generation and demonstrates the drastic reduction of search in comparison with conventional search algorithms.

  8. Multiprocessor architectural study

    NASA Technical Reports Server (NTRS)

    Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.

    1972-01-01

    An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance is discussed extensively with particular attention to the design approach which utilizes a cache memory associated with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of a hit ratio. Memory management is envisioned as a virtual memory system implemented either through segmentation or paging. Addressing is discussed in terms of various register designs adopted by current computers and those of advanced design.

  9. A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex

    PubMed Central

    Chaudhuri, Rishidev; Knoblauch, Kenneth; Gariel, Marie-Alice; Kennedy, Henry; Wang, Xiao-Jing

    2015-01-01

    We developed a large-scale dynamical model of the macaque neocortex, which is based on recently acquired directed- and weighted-connectivity data from tract-tracing experiments, and which incorporates heterogeneity across areas. A hierarchy of timescales naturally emerges from this system: sensory areas show brief, transient responses to input (appropriate for sensory processing), whereas association areas integrate inputs over time and exhibit persistent activity (suitable for decision-making and working memory). The model displays multiple temporal hierarchies, as evidenced by contrasting responses to visual versus somatosensory stimulation. Moreover, slower prefrontal and temporal areas have a disproportionate impact on global brain dynamics. These findings establish a circuit mechanism for “temporal receptive windows” that are progressively enlarged along the cortical hierarchy, suggest an extension of time integration in decision-making from local to large circuits, and should prompt a re-evaluation of the analysis of functional connectivity (measured by fMRI or EEG/MEG) by taking into account inter-areal heterogeneity. PMID:26439530

  10. The derivation and approximation of coarse-grained dynamics from Langevin dynamics

    NASA Astrophysics Data System (ADS)

    Ma, Lina; Li, Xiantao; Liu, Chun

    2016-11-01

    We present a derivation of a coarse-grained description, in the form of a generalized Langevin equation, from the Langevin dynamics model that describes the dynamics of bio-molecules. The focus is placed on the form of the memory kernel function, the colored noise, and the second fluctuation-dissipation theorem that connects them. Also presented is a hierarchy of approximations for the memory and random noise terms, using rational approximations in the Laplace domain. These approximations offer increasing accuracy. More importantly, they eliminate the need to evaluate the integral associated with the memory term at each time step. Direct sampling of the colored noise can also be avoided within this framework. Therefore, the numerical implementation of the generalized Langevin equation is much more efficient.
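
    For reference, the scalar form of the generalized Langevin equation and the second fluctuation-dissipation theorem mentioned above (the paper works with the multidimensional analogue) is

```latex
M\,\dot v(t) \;=\; F\bigl(x(t)\bigr)\;-\;\int_{0}^{t} K(t-s)\,v(s)\,\mathrm{d}s\;+\;R(t),
\qquad
\bigl\langle R(t)\,R(t')\bigr\rangle \;=\; k_{B}T\,K(t-t'),
```

    so approximating the memory kernel $K$ by a rational function of the Laplace variable replaces the convolution with a small set of auxiliary differential equations, which is what removes the per-step history integral and lets the colored noise be generated without direct sampling.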

  11. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy.

    PubMed

    Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H

    2018-05-02

    A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Memory Effects on Movement Behavior in Animal Foraging

    PubMed Central

    Bracis, Chloe; Gurarie, Eliezer; Van Moorter, Bram; Goodwin, R. Andrew

    2015-01-01

    An individual’s choices are shaped by its experience, a fundamental property of behavior important to understanding complex processes. Learning and memory are observed across many taxa and can drive behaviors, including foraging behavior. To explore the conditions under which memory provides an advantage, we present a continuous-space, continuous-time model of animal movement that incorporates learning and memory. Using simulation models, we evaluate the benefit memory provides across several types of landscapes with variable-quality resources and compare the memory model within a nested hierarchy of simpler models (behavioral switching and random walk). We find that memory almost always leads to improved foraging success, but that this effect is most marked in landscapes containing sparse, contiguous patches of high-value resources that regenerate relatively fast and are located in an otherwise devoid landscape. In these cases, there is a large payoff for finding a resource patch, due to size, value, or locational difficulty. While memory-informed search is difficult to differentiate from other factors using solely movement data, our results suggest that disproportionate spatial use of higher value areas, higher consumption rates, and consumption variability all point to memory influencing the movement direction of animals in certain ecosystems. PMID:26288228

  13. Memory Effects on Movement Behavior in Animal Foraging.

    PubMed

    Bracis, Chloe; Gurarie, Eliezer; Van Moorter, Bram; Goodwin, R Andrew

    2015-01-01

    An individual's choices are shaped by its experience, a fundamental property of behavior important to understanding complex processes. Learning and memory are observed across many taxa and can drive behaviors, including foraging behavior. To explore the conditions under which memory provides an advantage, we present a continuous-space, continuous-time model of animal movement that incorporates learning and memory. Using simulation models, we evaluate the benefit memory provides across several types of landscapes with variable-quality resources and compare the memory model within a nested hierarchy of simpler models (behavioral switching and random walk). We find that memory almost always leads to improved foraging success, but that this effect is most marked in landscapes containing sparse, contiguous patches of high-value resources that regenerate relatively fast and are located in an otherwise devoid landscape. In these cases, there is a large payoff for finding a resource patch, due to size, value, or locational difficulty. While memory-informed search is difficult to differentiate from other factors using solely movement data, our results suggest that disproportionate spatial use of higher value areas, higher consumption rates, and consumption variability all point to memory influencing the movement direction of animals in certain ecosystems.

  14. The functional architecture of the ventral temporal cortex and its role in categorization

    PubMed Central

    Grill-Spector, Kalanit; Weiner, Kevin S.

    2014-01-01

    Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction. PMID:24962370

  15. [Dynamic hierarchy of regulatory peptides. Structure of the induction relations of regulators as the target for therapeutic agents].

    PubMed

    Koroleva, S V; Miasoedov, N F

    2012-01-01

    Based on database information (literature from 1970-2010) on the effects of regulatory peptides (RP) and non-peptide neurotransmitters (dopamine, serotonin, norepinephrine, acetylcholine), possible cascade processes of endogenous regulators were analyzed. It was found that the entire continuum of RP and mediators forms a chaotic soup organized into ordered three-level compartments. Such a dynamic functional hierarchy of endogenous regulators makes it possible to create start-up and corrective tasks for a variety of physiological functions. Some examples of static and dynamic patterns of the induction processes of RP and mediators (which regulate states of anxiety, depression, learning and memory, feeding behavior, reproductive processes, etc.) are considered.

  16. A Bayesian generative model for learning semantic hierarchies

    PubMed Central

    Mittelman, Roni; Sun, Min; Kuipers, Benjamin; Savarese, Silvio

    2014-01-01

    Building fine-grained visual recognition systems that are capable of recognizing tens of thousands of categories, has received much attention in recent years. The well known semantic hierarchical structure of categories and concepts, has been shown to provide a key prior which allows for optimal predictions. The hierarchical organization of various domains and concepts has been subject to extensive research, and led to the development of the WordNet domains hierarchy (Fellbaum, 1998), which was also used to organize the images in the ImageNet (Deng et al., 2009) dataset, in which the category count approaches the human capacity. Still, for the human visual system, the form of the hierarchy must be discovered with minimal use of supervision or innate knowledge. In this work, we propose a new Bayesian generative model for learning such domain hierarchies, based on semantic input. Our model is motivated by the super-subordinate organization of domain labels and concepts that characterizes WordNet, and accounts for several important challenges: maintaining context information when progressing deeper into the hierarchy, learning a coherent semantic concept for each node, and modeling uncertainty in the perception process. PMID:24904452

  17. Hierarchy of non-glucose sugars in Escherichia coli.

    PubMed

    Aidelberg, Guy; Towbin, Benjamin D; Rothschild, Daphna; Dekel, Erez; Bren, Anat; Alon, Uri

    2014-12-24

    Understanding how cells make decisions, and why they make the decisions they make, is of fundamental interest in systems biology. To address this, we study the decisions made by E. coli on which genes to express when presented with two different sugars. It is well-known that glucose, E. coli's preferred carbon source, represses the uptake of other sugars by means of global and gene-specific mechanisms. However, less is known about the utilization of glucose-free sugar mixtures which are found in the natural environment of E. coli and in biotechnology. Here, we combine experiment and theory to map the choices of E. coli among 6 different non-glucose carbon sources. We used robotic assays and fluorescence reporter strains to make precise measurements of promoter activity and growth rate in all pairs of these sugars. We find that the sugars can be ranked in a hierarchy: in a mixture of a higher and a lower sugar, the lower sugar system shows reduced promoter activity. The hierarchy corresponds to the growth rate supported by each sugar- the faster the growth rate, the higher the sugar on the hierarchy. The hierarchy is 'soft' in the sense that the lower sugar promoters are not completely repressed. Measurement of the activity of the master regulator CRP-cAMP shows that the hierarchy can be quantitatively explained based on differential activation of the promoters by CRP-cAMP. Comparing sugar system activation as a function of time in sugar pair mixtures at sub-saturating concentrations, we find cases of sequential activation, and also cases of simultaneous expression of both systems. Such simultaneous expression is not predicted by simple models of growth rate optimization, which predict only sequential activation. We extend these models by suggesting multi-objective optimization for both growing rapidly now and preparing the cell for future growth on the poorer sugar. We find a defined hierarchy of sugar utilization, which can be quantitatively explained by differential activation by the master regulator cAMP-CRP. The present approach can be used to understand cell decisions when presented with mixtures of conditions.

  18. Benchmarking Memory Performance with the Data Cube Operator

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabanov, Leonid V.

    2004-01-01

    Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.
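
    The data cube operator itself is easy to state: compute a group-by view for every subset of the d dimension attributes, 2^d views in total, which is what drives the benchmark's pressure on every level of the memory and storage hierarchy. The sketch below computes all views naively; the "smallest parent" scheduling used in the actual algorithm is not reproduced.

```python
from itertools import combinations
from collections import defaultdict

def data_cube(rows, d, measure):
    """All 2^d group-by views of rows whose first d fields are dimensions.

    Returns {dimension-index subset: {group key: summed measure}}.
    """
    views = {}
    for r in range(d + 1):
        for subset in combinations(range(d), r):
            agg = defaultdict(int)
            for t in rows:
                agg[tuple(t[i] for i in subset)] += t[measure]
            views[subset] = dict(agg)
    return views

# Tiny ADC-like example: 3 dimension attributes and one integer measure.
rows = [(1, 2, 1, 10), (1, 3, 1, 5), (2, 2, 1, 7)]
cube = data_cube(rows, d=3, measure=3)   # 8 views, from () up to (0, 1, 2)
```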

  19. Lasting Adaptations in Social Behavior Produced by Social Disruption and Inhibition of Adult Neurogenesis

    PubMed Central

    Opendak, Maya; Offit, Lily; Monari, Patrick; Schoenfeld, Timothy J.; Sonti, Anup N.; Cameron, Heather A.

    2016-01-01

    Research on social instability has focused on its detrimental consequences, but most people are resilient and respond by invoking various coping strategies. To investigate cellular processes underlying such strategies, a dominance hierarchy of rats was formed and then destabilized. Regardless of social position, rats from disrupted hierarchies had fewer new neurons in the hippocampus compared with rats from control cages and those from stable hierarchies. Social disruption produced a preference for familiar over novel conspecifics, a change that did not involve global memory impairments or increased anxiety. Using the neuropeptide oxytocin as a tool to increase neurogenesis in the hippocampus of disrupted rats restored preference for novel conspecifics to predisruption levels. Conversely, reducing the number of new neurons by limited inhibition of adult neurogenesis in naive transgenic GFAP–thymidine kinase rats resulted in social behavior similar to disrupted rats. Together, these results provide novel mechanistic evidence that social disruption shapes behavior in a potentially adaptive way, possibly by reducing adult neurogenesis in the hippocampus. SIGNIFICANCE STATEMENT To investigate cellular processes underlying adaptation to social instability, a dominance hierarchy of rats was formed and then destabilized. Regardless of social position, rats from disrupted hierarchies had fewer new neurons in the hippocampus compared with rats from control cages and those from stable hierarchies. Unexpectedly, these changes were accompanied by changes in social strategies without evidence of impairments in cognition or anxiety regulation. Restoring adult neurogenesis in disrupted rats using oxytocin and conditionally suppressing the production of new neurons in socially naive GFAP–thymidine kinase rats showed that loss of 6-week-old neurons may be responsible for adaptive changes in social behavior. PMID:27358459

  20. The future of memory

    NASA Astrophysics Data System (ADS)

    Marinella, M.

    In the not too distant future, the traditional memory and storage hierarchy may be replaced by a single Storage Class Memory (SCM) device integrated on or near the logic processor. Traditional magnetic hard drives, NAND flash, DRAM, and higher level caches (L2 and up) will be replaced with a single high performance memory device. The Storage Class Memory paradigm will require high speed (< 100 ns read/write), excellent endurance (> 10^12 cycles), nonvolatility (retention > 10 years), and low switching energies (< 10 pJ per switch). The International Technology Roadmap for Semiconductors (ITRS) has recently evaluated several potential candidate SCM technologies, including Resistive (or Redox) RAM, Spin Torque Transfer RAM (STT-MRAM), and phase change memory (PCM). All of these devices show potential well beyond that of current flash technologies, and research efforts are underway to improve their endurance, write speeds, and scalability to be on par with DRAM. This progress has interesting implications for space electronics: each of these emerging device technologies shows excellent resistance to the types of radiation typically found in space applications. Commercially developed, high density storage class memory-based systems may include a memory that is physically radiation hard, and suitable for space applications without major shielding efforts. This paper reviews the Storage Class Memory concept, emerging memory devices, and possible applicability to radiation hardened electronics for space.

  1. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, by contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are indicated by comparison with the single equivalent strategy.

  2. Schooling Space: Where South Africans Learnt to Position Themselves within the Hierarchy of Apartheid Society

    ERIC Educational Resources Information Center

    Karlsson, Jenni

    2004-01-01

    In setting out to understand how South African school space was harnessed to the political project of apartheid, the author explores memory accounts from several adults who attended school during the apartheid era. Her analysis of their reminiscences found that non-pedagogic areas of the school and public domain beyond school premises were places…

  3. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using the stack-shifting hybrid scheduling approach. Different from the traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism which can decrease both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is the memory cost optimized, but also the energy cost is optimized in LiveOS; this is achieved by using the multi-core "context aware" and multi-core "power-off/wakeup" energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% when compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264

  4. Associative Recognition Memory Awareness Improved by Theta-Burst Stimulation of Frontopolar Cortex

    PubMed Central

    Ryals, Anthony J.; Rogers, Lynn M.; Gross, Evan Z.; Polnaszek, Kelly L.; Voss, Joel L.

    2016-01-01

    Neuroimaging and lesion studies have implicated specific prefrontal cortex locations in subjective memory awareness. Based on this evidence, a rostrocaudal organization has been proposed whereby increasingly anterior prefrontal regions are increasingly involved in memory awareness. We used theta-burst transcranial magnetic stimulation (TBS) to temporarily modulate dorsolateral versus frontopolar prefrontal cortex to test for distinct causal roles in memory awareness. In three sessions, participants received TBS bilaterally to frontopolar cortex, dorsolateral prefrontal cortex, or a control location prior to performing an associative-recognition task involving judgments of memory awareness. Objective memory performance (i.e., accuracy) did not differ based on stimulation location. In contrast, frontopolar stimulation significantly influenced several measures of memory awareness. During study, judgments of learning were more accurate following frontopolar TBS, such that lower ratings were given to items that were subsequently forgotten. Confidence ratings during test were also higher for correct trials following frontopolar TBS. Finally, trial-by-trial correspondence between overt performance and subjective awareness during study demonstrated a linear increase across control, dorsolateral, and frontopolar TBS locations, supporting a rostrocaudal hierarchy of prefrontal contributions to memory awareness. These findings indicate that frontopolar cortex contributes causally to memory awareness, which was improved selectively by anatomically targeted TBS. PMID:25577574

  5. Two routes toward optimism: how agentic and communal themes in autobiographical memories guide optimism for the future.

    PubMed

    Austin, Adrienne; Costabile, Kristi

    2017-11-01

    Autobiographical memories are particularly adaptive because they function not only to preserve the past, but also to direct our future thoughts and behaviours. Two studies were conducted to examine how communal and agentic themes of positive autobiographical memories differentially predicted the route from autobiographical memories to optimism for the future. Across two studies, results revealed that the degree to which participants focused on communal themes in their autobiographical memories predicted their experience of nostalgia. In turn, the experience of nostalgia increased participants' levels of self-esteem and in turn, optimism for the future. By contrast, the degree to which participants focused on agentic themes in their memories predicted self-esteem and optimism, operating outside the experience of nostalgia. These effects remained even after controlling for self-focused attention. Together, these studies provide greater understanding of the interrelations among autobiographical memory, self-concept, and time, and demonstrate how agency and communion operate to influence perceptions of one's future when thinking about the past.

  6. Research findings from the Memories of Nursing oral history project.

    PubMed

    Thomas, Gail; Rosser, Elizabeth

    2017-02-23

    Capturing the stories of nurses who practised in the past offers the opportunity to reflect on the changes in practice over time to determine lessons for the future. This article shares some of the memories of a group of 16 nurses who were interviewed in Bournemouth, UK, between 2009 and 2016. Thematic analysis of the interview transcripts identified a number of themes, three of which are presented: defining moments, hygiene and hierarchy. The similarities and differences between their experiences and contemporary nursing practice are discussed to highlight how it may be timely to think back in order to take practice forward positively in the future.

  7. Arithmetic Data Cube as a Data Intensive Benchmark

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabanov, Leonid V.

    2003-01-01

    Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.

  8. A Competitive and Experiential Assignment in Search Engine Optimization Strategy

    ERIC Educational Resources Information Center

    Clarke, Theresa B.; Clarke, Irvine, III

    2014-01-01

    Despite an increase in ad spending and demand for employees with expertise in search engine optimization (SEO), methods for teaching this important marketing strategy have received little coverage in the literature. Using Bloom's cognitive goals hierarchy as a framework, this experiential assignment provides a process for educators who may be new…

  9. Integration of fuzzy analytic hierarchy process and probabilistic dynamic programming in formulating an optimal fleet management model

    NASA Astrophysics Data System (ADS)

    Teoh, Lay Eng; Khoo, Hooi Ling

    2013-09-01

    This study deals with two major aspects of airlines, i.e. supply and demand management. The supply aspect focuses on the mathematical formulation of an optimal fleet management model to maximize the operational profit of the airline, while the demand aspect focuses on the incorporation of mode choice modeling as part of the developed model. The proposed methodology is outlined in two stages: Fuzzy Analytic Hierarchy Process is first adopted to capture mode choice modeling in order to quantify the probability of probable phenomena (for the aircraft acquisition/leasing decision); then, an optimization model is developed as a probabilistic dynamic programming model to determine the optimal number and types of aircraft to be acquired and/or leased in order to meet stochastic demand during the planning horizon. The findings of an illustrative case study show that the proposed methodology is viable. The results demonstrate that the incorporation of mode choice modeling could affect the operational profit and fleet management decisions of the airline to varying degrees.
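
    As a minimal sketch of the second stage only (a generic finite-horizon stochastic dynamic program for fleet sizing; the function name, cost parameters, and demand distribution below are illustrative assumptions, not the authors' model), consider:

```python
def optimal_fleet_plan(horizon, max_fleet, demand_dist, profit_per_served,
                       acquisition_cost, holding_cost):
    """Finite-horizon stochastic DP for fleet sizing (illustrative sketch only).
    State: current fleet size. Decision: number of aircraft to acquire.
    demand_dist: list of (demand, probability) pairs, assumed i.i.d. per period."""
    # value[t][s] = best expected profit from period t onward with fleet size s
    value = [[0.0] * (max_fleet + 1) for _ in range(horizon + 1)]
    policy = [[0] * (max_fleet + 1) for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for s in range(max_fleet + 1):
            best, best_a = float("-inf"), 0
            for a in range(max_fleet - s + 1):           # acquisitions this period
                fleet = s + a
                expected = -acquisition_cost * a - holding_cost * fleet
                for demand, prob in demand_dist:
                    served = min(fleet, demand)
                    expected += prob * (profit_per_served * served
                                        + value[t + 1][fleet])
                if expected > best:
                    best, best_a = expected, a
            value[t][s], policy[t][s] = best, best_a
    return value, policy

# Hypothetical numbers, for illustration only.
dist = [(2, 0.3), (4, 0.5), (6, 0.2)]
val, pol = optimal_fleet_plan(horizon=3, max_fleet=6, demand_dist=dist,
                              profit_per_served=10.0, acquisition_cost=15.0,
                              holding_cost=2.0)
print(pol[0][0], val[0][0])   # acquisitions recommended at t=0 with an empty fleet
```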

  10. A decision support system using analytical hierarchy process (AHP) for the optimal environmental reclamation of an open-pit mine

    NASA Astrophysics Data System (ADS)

    Bascetin, A.

    2007-04-01

    The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives in order to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region of Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. Also, it is found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
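
    For readers unfamiliar with the mechanics of AHP, the following generic sketch (not the model of the cited study) derives priority weights from a pairwise-comparison matrix via its principal eigenvector and reports Saaty's consistency ratio:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights and consistency ratio for an AHP pairwise-comparison
    matrix (a generic sketch, not the model used in the cited study)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                 # normalized priority weights
    ci = (eigvals.real[k] - n) / (n - 1)         # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.49)  # random index
    return w, ci / ri                            # weights, consistency ratio

# Example: three reclamation criteria compared pairwise on Saaty's 1-9 scale.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print(weights, cr)   # CR < 0.1 is conventionally taken as acceptably consistent
```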

  11. Simulation-based planning for theater air warfare

    NASA Astrophysics Data System (ADS)

    Popken, Douglas A.; Cox, Louis A., Jr.

    2004-08-01

    Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.

  12. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks.

    PubMed

    Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-07-19

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures. It employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum-minimum criterion to obtain each node's optimal route to the base station. Simulation experiments show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms.
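
    As a rough sketch of the maximum-minimum criterion mentioned above (the full DHCO procedure also involves the hierarchy-based connection mechanism; the function name and data layout below are our assumptions, not the paper's API), route selection over a feasible routing set might look like this:

```python
def select_route_max_min(feasible_routes, residual_energy):
    """Pick the route whose minimum residual energy along the path is largest.
    feasible_routes: list of routes, each a list of node ids ending at the base
    station. residual_energy: dict mapping node id -> remaining energy.
    Illustrative only."""
    def bottleneck(route):
        # The base station is assumed to be mains-powered, so exclude it.
        return min(residual_energy[n] for n in route[:-1])
    return max(feasible_routes, key=bottleneck)

# Tiny example: two candidate routes from node 1 to base station 0.
energy = {1: 0.9, 2: 0.2, 3: 0.6, 4: 0.7}
routes = [[1, 2, 0], [1, 3, 4, 0]]
print(select_route_max_min(routes, energy))   # -> [1, 3, 4, 0]
```

    Choosing the route with the best "weakest node" spreads the traffic load away from nearly depleted nodes, which is the energy-balancing effect the abstract describes.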

  13. What people know about electronic devices: A descriptive study

    NASA Astrophysics Data System (ADS)

    Kieras, D. E.

    1982-10-01

    Informal descriptive results on the nature of people's natural knowledge of electronic devices are presented. Expert and nonexpert subjects were given an electronic device to examine and describe orally. The devices ranged from familiar everyday devices, to those familiar only to the expert, to unusual devices unfamiliar even to an expert. College students were asked to describe everyday devices from memory. The results suggest that device knowledge consists of the major categories of what the device is for, how it is used, its structure in terms of subdevices, its physical layout, how it works, and its behavior. A preliminary theoretical framework for device knowledge is that it consists of a hierarchy of schemas, corresponding to a hierarchical decomposition of the device into subdevices, with each level containing the major categories of information.

  14. Using a source-to-source transformation to introduce multi-threading into the AliRoot framework for a parallel event reconstruction

    NASA Astrophysics Data System (ADS)

    Lohn, Stefan B.; Dong, Xin; Carminati, Federico

    2012-12-01

    Chip multiprocessors are going to support massive parallelism through many additional physical and logical cores. Performance improvements can no longer be obtained by increasing the clock frequency, because the technical limits have almost been reached. Instead, parallel execution must be used to gain performance. Resources like main memory, the cache hierarchy, the bandwidth of the memory bus, or the links between cores and sockets are not going to improve as fast. Hence, parallelism can only result in performance gains if memory usage is optimized and the communication between threads is minimized. Besides, concurrent programming has become a domain for experts: implementing multi-threading is error prone and labor-intensive. A full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide the capability of parallel processing by using a semi-automatic source-to-source transformation to address the problems described above and to provide a straightforward way of parallelization with almost no interference between threads. This makes the approach simple and reduces the required manual changes in the code. In a first step, unconditional thread-safety will be introduced to bring the original sequential and thread-unaware source code into a position to utilize multi-threading. Afterwards, further investigations have to be performed to identify candidate classes that are useful to share amongst threads. Then, in a second step, the transformation has to change the code to share these classes and finally to verify that no invalid interferences between threads remain.

  15. On the thermodynamics of multilevel evolution.

    PubMed

    Tessera, Marc; Hoelzer, Guy A

    2013-09-01

    Biodiversity is hierarchically structured both phylogenetically and functionally. Phylogenetic hierarchy is understood as a product of branching organic evolution as described by Darwin. Ecosystem biologists understand some aspects of functional hierarchy, such as food web architecture, as a product of evolutionary ecology; but functional hierarchy extends to much lower scales of organization than those studied by ecologists. We argue that the more general use of the term "evolution" employed by physicists and applied to non-living systems connects directly to the narrow biological meaning. Physical evolution is best understood as a thermodynamic phenomenon, and this perspective comfortably includes all of biological evolution. We suggest four dynamical factors that build on each other in a hierarchical fashion and set the stage for the Darwinian evolution of biological systems: (1) the entropic erosion of structure; (2) the construction of dissipative systems; (3) the reproduction of growing systems and (4) the historical memory accrued to populations of reproductive agents by the acquisition of hereditary mechanisms. A particular level of evolution can underpin the emergence of higher levels, but evolutionary processes persist at each level in the hierarchy. We also argue that particular evolutionary processes can occur at any level of the hierarchy where they are not obstructed by material constraints. This theoretical framework provides an extensive basis for understanding natural selection as a multilevel process. The extensive literature on thermodynamics in turn provides an important advantage to this perspective on the evolution of higher levels of organization, such as the evolution of altruism that can accompany the emergence of social organization. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Utilizing Hierarchical Segmentation to Generate Water and Snow Masks to Facilitate Monitoring Change with Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.; Plaza, Antonio J.

    2006-01-01

    The hierarchical segmentation (HSEG) algorithm is a hybrid of hierarchical step-wise optimization and constrained spectral clustering that produces a hierarchical set of image segmentations. This segmentation hierarchy organizes image data in a manner that makes the image's information content more accessible for analysis by enabling region-based analysis. This paper discusses data analysis with HSEG and describes several measures of region characteristics that may be useful for analyzing segmentation hierarchies for various applications. Segmentation hierarchy analysis for generating land/water and snow/ice masks from MODIS (Moderate Resolution Imaging Spectroradiometer) data was demonstrated and compared with the corresponding MODIS standard products. The masks based on HSEG segmentation hierarchies compare very favorably to the MODIS standard products. Further, the HSEG-based land/water mask was specifically tailored to the MODIS data, and the HSEG snow/ice mask did not require the setting of a critical threshold as required in the production of the corresponding MODIS standard product.

  17. Cognitive Theory within the Framework of an Information Processing Model and Learning Hierarchy: Viable Alternative to the Bloom-Mager System.

    ERIC Educational Resources Information Center

    Stahl, Robert J.

    This review of the current status of the human information processing model presents the Stahl Perceptual Information Processing and Operations Model (SPInPrOM) as a model of how thinking, memory, and the processing of information take place within the individual learner. A related system, the Domain of Cognition, is presented as an alternative to…

  18. Justification of Estimates for Fiscal Year 1983 Submitted to Congress.

    DTIC Science & Technology

    1982-02-01

    hierarchies to aid software production; completion of the components of an adaptive suspension vehicle including a storage energy unit, hydraulics, laser...and corrosion (long storage times), and radiation-induced breakdown. Solid-lubricated main engine bearings for cruise missile engines would offer...environments will cause "soft errors" (computational and memory storage errors) in advanced microelectronic circuits. Research on high-speed, low-power

  19. Neuroimaging markers associated with maintenance of optimal memory performance in late-life.

    PubMed

    Dekhtyar, Maria; Papp, Kathryn V; Buckley, Rachel; Jacobs, Heidi I L; Schultz, Aaron P; Johnson, Keith A; Sperling, Reisa A; Rentz, Dorene M

    2017-06-01

    Age-related memory decline has been well-documented; however, some individuals reach their 8th-10th decade while maintaining strong memory performance. To determine which demographic and biomarker factors differentiated top memory performers (aged 75+, top 20% for memory) from their peers and whether top memory performance was maintained over 3 years. Clinically normal adults (n=125, CDR=0; age: 79.5±3.57 years) from the Harvard Aging Brain Study underwent cognitive testing and neuroimaging (amyloid PET, MRI) at baseline and 3-year follow-up. Participants were grouped into Optimal (n=25) vs. Typical (n=100) performers using performance on 3 challenging memory measures. Non-parametric tests were used to compare groups. There were no differences in age, sex, or education between Optimal and Typical performers. The Optimal group performed better in Processing Speed (p=0.016) and Executive Functioning (p<0.001). Optimal performers had larger hippocampal volumes at baseline compared with Typical performers (p=0.027) but no differences in amyloid burden (p=0.442). Twenty-three of the 25 Optimal performers had longitudinal data; 16 maintained top memory performance while 7 declined. Non-Maintainers additionally declined in Executive Functioning but not Processing Speed. Longitudinally, there were no hippocampal volume differences between Maintainers and Non-Maintainers; however, Non-Maintainers exhibited higher amyloid burden at baseline in contrast with Maintainers (p=0.008). Excellent memory performance in late life does not guarantee protection against cognitive decline. Those who maintain an optimal memory into the 8th and 9th decades may have lower levels of AD pathology. Copyright © 2017. Published by Elsevier Ltd.

  20. Memory control beliefs and everyday forgetfulness in adulthood: the effects of selection, optimization, and compensation strategies.

    PubMed

    Scheibner, Gunnar Benjamin; Leathem, Janet

    2012-01-01

    Controlling for age, gender, education, and self-rated health, the present study used regression analyses to examine the relationships between memory control beliefs and self-reported forgetfulness in the context of the meta-theory of Selective Optimization with Compensation (SOC). Findings from this online survey (N = 409) indicate that, among adult New Zealanders, a higher sense of memory control accounts for a 22.7% reduction in self-reported forgetfulness. Similarly, optimization was found to account for a 5% reduction in forgetfulness while the strategies of selection and compensation were not related to self-reports of forgetfulness. Optimization partially mediated the beneficial effects that some memory beliefs (e.g., believing that memory decline is inevitable and believing in the potential for memory improvement) have on forgetfulness. It was concluded that memory control beliefs are important predictors of self-reported forgetfulness while the support for the SOC model in the context of memory controllability and everyday forgetfulness is limited.

  1. Automated control of hierarchical systems using value-driven methods

    NASA Technical Reports Server (NTRS)

    Pugh, George E.; Burke, Thomas E.

    1990-01-01

    An introduction is given to the Value-driven methodology, which has been successfully applied to solve a variety of difficult decision, control, and optimization problems. Many real-world decision processes (e.g., those encountered in scheduling, allocation, and command and control) involve a hierarchy of complex planning considerations. For such problems it is virtually impossible to define a fixed set of rules that will operate satisfactorily over the full range of probable contingencies. Decision Science Applications' value-driven methodology offers a systematic way of automating the intuitive, common-sense approach used by human planners. The inherent responsiveness of value-driven systems to user-controlled priorities makes them particularly suitable for semi-automated applications in which the user must remain in command of the system's operation. Three examples of the practical application of the approach in the automation of hierarchical decision processes are discussed: the TAC Brawler air-to-air combat simulation is a four-level computerized hierarchy; the autonomous underwater vehicle mission planning system is a three-level control system; and the Space Station Freedom electrical power control and scheduling system is designed as a two-level hierarchy. The methodology is compared with rule-based systems and with other more widely known optimization techniques.

  2. Simulating Hydrologic Flow and Reactive Transport with PFLOTRAN and PETSc on Emerging Fine-Grained Parallel Computer Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Rupp, K.; Smith, B. F.; Brown, J.; Knepley, M.; Zhang, H.; Adams, M.; Hammond, G. E.

    2017-12-01

    As the high-performance computing community pushes towards the exascale horizon, power and heat considerations have driven the increasing importance and prevalence of fine-grained parallelism in new computer architectures. High-performance computing centers have become increasingly reliant on GPGPU accelerators and "manycore" processors such as the Intel Xeon Phi line, and 512-bit SIMD registers have even been introduced in the latest generation of Intel's mainstream Xeon server processors. The high degree of fine-grained parallelism and more complicated memory hierarchy considerations of such "manycore" processors present several challenges to existing scientific software. Here, we consider how the massively parallel, open-source hydrologic flow and reactive transport code PFLOTRAN - and the underlying Portable, Extensible Toolkit for Scientific Computation (PETSc) library on which it is built - can best take advantage of such architectures. We will discuss some key features of these novel architectures and our code optimizations and algorithmic developments targeted at them, and present experiences drawn from working with a wide range of PFLOTRAN benchmark problems on these architectures.

  3. Automatic Blocking Of QR and LU Factorizations for Locality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Q; Kennedy, K; You, H

    2004-03-26

    QR and LU factorizations for dense matrices are important linear algebra computations that are widely used in scientific applications. To efficiently perform these computations on modern computers, the factorization algorithms need to be blocked when operating on large matrices to effectively exploit the deep cache hierarchy prevalent in today's computer memory systems. Because both QR (based on Householder transformations) and LU factorization algorithms contain complex loop structures, few compilers can fully automate the blocking of these algorithms. Though linear algebra libraries such as LAPACK provide manually blocked implementations of these algorithms, more benefit can be gained by automatically generating blocked versions of the computations, such as automatic adaptation to different blocking strategies. This paper demonstrates how to apply an aggressive loop transformation technique, dependence hoisting, to produce efficient blockings for both QR and LU with partial pivoting. We present different blocking strategies that can be generated by our optimizer and compare the performance of auto-blocked versions with manually tuned versions in LAPACK, both using reference BLAS, ATLAS BLAS and native BLAS specially tuned for the underlying machine architectures.
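
    To make the locality argument concrete, the following generic cache-blocking example (an illustration of blocking in general, not of the dependence-hoisting transformation described in the paper) restructures a matrix multiplication into tile operations so that each tile of the operands is reused while it is still resident in cache:

```python
import numpy as np

def matmul_blocked(A, B, block=64):
    """Cache-blocked matrix multiply: operate on block x block tiles so each
    tile of A, B, and C stays in cache while it is being reused.
    A generic illustration of blocking for locality, not LAPACK's algorithm."""
    n = A.shape[0]
    C = np.zeros_like(A)
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for jj in range(0, n, block):
                # Multiply one tile pair and accumulate into the C tile.
                C[ii:ii+block, jj:jj+block] += (
                    A[ii:ii+block, kk:kk+block] @ B[kk:kk+block, jj:jj+block])
    return C

n = 256
A, B = np.random.rand(n, n), np.random.rand(n, n)
assert np.allclose(matmul_blocked(A, B), A @ B)
```

    Choosing the block size so that three tiles fit in the targeted cache level is the essence of the blocking strategies the paper generates automatically.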

  4. VLBI-resolution radio-map algorithms: Performance analysis of different levels of data-sharing on multi-socket, multi-core architectures

    NASA Astrophysics Data System (ADS)

    Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.

    2012-09-01

    A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what would be the observed radio-maps if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithms on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with complex memory hierarchy that includes shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future works. The experiments show that the data-privatizing model scales efficiently on medium scale multi-socket, multi-core systems (up to 48 cores) while regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all used multi-socket, multi-core systems.

  5. Limits to the usability of iconic memory.

    PubMed

    Rensink, Ronald A

    2014-01-01

    Human vision briefly retains a trace of a stimulus after it disappears. This trace, iconic memory, is often believed to be a surrogate for the original stimulus, a representational structure that can be used as if the original stimulus were still present. To investigate its nature, a flicker-search paradigm was developed that relied upon a full scan (rather than partial report) of its contents. Results show that for visual search it can indeed act as a surrogate, with little cost for alternating between visible and iconic representations. However, the duration over which it can be used depends on the type of task: some tasks can use iconic memory for at least 240 ms, others for only about 190 ms, and others for no more than about 120 ms. The existence of these different limits suggests that iconic memory may have multiple layers, each corresponding to a particular level of the visual hierarchy. In this view, the inability to use a layer of iconic memory may reflect an inability to maintain feedback connections to the corresponding representation.

  6. Radiative and precipitation controls on root zone soil moisture spectra

    DOE PAGES

    Nakai, Taro; Katul, Gabriel G.; Kotani, Ayumi; ...

    2014-10-20

    Here, we show that temporal variability in root zone soil moisture content (w) exhibits a Lorentzian spectrum with memory dictated by a damping term when forced with white-noise precipitation. In the context of regional dimming, radiation and precipitation variability are needed to reproduce w trends, prompting interest in how the w memory is altered by radiative forcing. A hierarchy of models that sequentially introduce the spectrum of precipitation, net radiation, and the effect of w on evaporative and drainage losses was used to analyze the spectrum of w at subtropical and temperate forested sites. Reproducing the w spectra at long time scales necessitated simultaneous precipitation and net radiation measurements, depending on site conditions. The w memory inferred from the observed w spectra was 25–38 days, larger than that determined from maximum wet evapotranspiration and field capacity. Finally, the w memory can be reasonably inferred from the Lorentzian spectrum when precipitation and evapotranspiration are in phase.
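
    For reference, the Lorentzian form referred to above can be written as follows; the notation is our assumption, based on the standard result for a linearly damped store forced by white noise, not a quotation of the authors' equations:

```latex
% Linear soil-moisture balance forced by white-noise precipitation p(t):
%   dw/dt = -\eta\, w + p(t),   with loss (damping) rate \eta.
% The resulting power spectrum is Lorentzian, with memory time scale \tau = 1/\eta:
S_w(\omega) \;=\; \frac{\sigma_p^{2}}{\eta^{2} + \omega^{2}},
\qquad \tau \;=\; \frac{1}{\eta}.
```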

  7. Software Techniques for Non-Von Neumann Architectures

    DTIC Science & Technology

    1990-01-01

    Commtopo: programmable Benes net.; hypercubic lattice for QCD. Control: CENTRALIZED. Assign: STATIC. Memory: SHARED. Synch: UNIVERSAL. Max-cpu: 566. Processor...boards (each = 4 floating point units, 2 multipliers). Cpu-size: 32-bit floating point chips. Perform: 11.4 Gflops. Market: quantum chromodynamics (QCD)...functions there should exist a capability to define hierarchies and lattices of complex objects. A complex object can be made up of a set of simple objects

  8. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.
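
    As a minimal, hedged illustration of the kind of analysis such a queueing model might support (the report's actual model is not specified in this summary; all names and numbers below are our own), an M/M/1 approximation of contention at a shared memory module gives the expected response time seen by the processors:

```python
def mm1_memory_response_time(request_rate, service_time):
    """Expected response time (service + queueing delay) at a shared memory
    module modeled as an M/M/1 queue. Illustrative only; not the report's model.
    request_rate: aggregate requests per second from all processors.
    service_time: mean time to service one request, in seconds."""
    utilization = request_rate * service_time
    if utilization >= 1.0:
        raise ValueError("memory module is saturated (utilization >= 1)")
    return service_time / (1.0 - utilization)   # W = 1 / (mu - lambda)

# Example: 8 processors each issuing 1e6 requests/s to a module with 100 ns service.
print(mm1_memory_response_time(8e6, 100e-9))   # expected seconds per request
```

    The point such a model captures is that response time grows sharply as the shared level of the hierarchy approaches saturation, which is why deeper private caches pay off.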

  9. Skill and Working Memory.

    DTIC Science & Technology

    1982-04-30

    clusters of rooms or areas. The fairly localized property of architectural patterns at the lowest level in the hierarchy is reminiscent of the localized...three digits. We have termed these clusters of groups "supergroups". Finally, when these supergroups became too large (more than 4 or 5 groups), SF...Supergroups -> Clusters of Supergroups. In another study, run separately on SF and DD, after an hour's

  10. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks

    PubMed Central

    Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-01-01

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures. It employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum–minimum criterion to obtain each node's optimal route to the base station. Simulation experiments show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms. PMID:28753962

  11. Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features, are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2 over the version that used default optimizations, but no auto-tuning. We demonstrate that observations made from microbenchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
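
    The following sketch conveys the flavor of parameterized micro-benchmarking in miniature (the paper targets GPU tensor contraction kernels; here a plain NumPy kernel and a toy parameter space stand in, and all names are our own assumptions): sweep a small parameter space once, record the timings, and pick the best-performing variant per problem size.

```python
import time
import numpy as np

def time_variant(fn, *args, repeats=3):
    """Median wall-clock time of fn(*args) over a few repeats."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[len(samples) // 2]

# Two candidate implementations of the same contraction C[i,j] = sum_k A[i,k]*B[k,j].
variants = {
    "einsum": lambda A, B: np.einsum("ik,kj->ij", A, B),
    "matmul": lambda A, B: A @ B,
}

# Micro-benchmark over a small parameter space (problem sizes), then pick winners.
best = {}
for n in (64, 256, 1024):          # hypothetical sizes, illustration only
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    timings = {name: time_variant(fn, A, B) for name, fn in variants.items()}
    best[n] = min(timings, key=timings.get)
print(best)   # which variant to prefer for each problem size
```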

  12. Optimal causal inference: estimating stored information and approximating causal architecture.

    PubMed

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding, a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
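
    In rate-distortion form, the optimal causal filtering problem can be written roughly as below; the notation is ours and only paraphrases the formulation sketched in the abstract:

```latex
% Compress the observed past \overleftarrow{X} into a representation R that
% retains predictive information about the future \overrightarrow{X}:
\min_{p(r \mid \overleftarrow{x})} \;
  I[\overleftarrow{X};R] \;-\; \beta\, I[R;\overrightarrow{X}],
\qquad \beta > 0 .
% As the model-complexity constraint is relaxed (\beta \to \infty), the optimal R
% approaches the causal-state partition, and I[\overleftarrow{X};R] approaches the
% amount of historical information the process stores.
```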

  13. Data storage technology comparisons

    NASA Technical Reports Server (NTRS)

    Katti, Romney R.

    1990-01-01

    The role of data storage and data storage technology is an integral, though conceptually often underestimated, portion of data processing technology. Data storage is important in the mass storage mode in which generated data is buffered for later use. But data storage technology is also important in the data flow mode when data are manipulated and hence required to flow between databases, datasets and processors. This latter mode is commonly associated with memory hierarchies which support computation. VLSI devices can reasonably be defined as electronic circuit devices such as channel and control electronics as well as highly integrated, solid-state devices that are fabricated using thin film deposition technology. VLSI devices in both capacities play an important role in data storage technology. In addition to random access memories (RAM), read-only memories (ROM), and other silicon-based variations such as PROM's, EPROM's, and EEPROM's, integrated devices find their way into a variety of memory technologies which offer significant performance advantages. These memory technologies include magnetic tape, magnetic disk, magneto-optic disk, and vertical Bloch line memory. In this paper, some comparison between selected technologies will be made to demonstrate why more than one memory technology exists today, based for example on access time and storage density at the active bit and system levels.

  14. Using Fuzzy Analytic Hierarchy Process multicriteria and Geographical information system for coastal vulnerability analysis in Morocco: The case of Mohammedia

    NASA Astrophysics Data System (ADS)

    Tahri, Meryem; Maanan, Mohamed; Hakdaoui, Mustapha

    2016-04-01

    This paper presents a method to assess vulnerability to coastal risks such as coastal erosion or marine submersion by applying Fuzzy Analytic Hierarchy Process (FAHP) and spatial analysis techniques with a Geographic Information System (GIS). The coast of Mohammedia, located in Morocco, was chosen as the study site to implement and validate the proposed framework by applying a GIS-FAHP based methodology. The coastal risk vulnerability mapping follows multi-parametric causative factors such as sea level rise, significant wave height, tidal range, coastal erosion, elevation, geomorphology and distance to urban areas. The Fuzzy Analytic Hierarchy Process methodology enables the calculation of the corresponding criteria weights. The result shows that the coastline of Mohammedia is characterized by moderate, high and very high levels of vulnerability to coastal risk. The high vulnerability areas are situated in the east at Monika and Sablette beaches. This technical approach is based on the efficiency of the Geographic Information System tool combined with the Fuzzy Analytic Hierarchy Process to help decision makers find optimal strategies to minimize coastal risks.

  15. A Game-Theoretical Winner and Loser Model of Dominance Hierarchy Formation.

    PubMed

    Kura, Klodeta; Broom, Mark; Kandler, Anne

    2016-06-01

    Many animals spend large parts of their lives in groups. Within such groups, they need to find efficient ways of dividing available resources between them. This is often achieved by means of a dominance hierarchy, which in its most extreme linear form allocates a strict priority order to the individuals. Once a hierarchy is formed, it is often stable over long periods, but the formation of hierarchies among individuals with little or no knowledge of each other can involve aggressive contests. The outcome of such contests can have significant effects on later contests, with previous winners more likely to win (winner effects) and previous losers more likely to lose (loser effects). This scenario has been modelled by a number of authors, in particular by Dugatkin. In his model, individuals engage in aggressive contests if the assessment of their fighting ability relative to their opponent is above a threshold [Formula: see text]. Here we present a model where each individual can choose its own value [Formula: see text]. This enables us to address questions such as how aggressive should individuals be in order to take up one of the first places in the hierarchy? We find that a unique strategy evolves, as opposed to a mixture of strategies. Thus, in any scenario there exists a unique best level of aggression, and individuals should not switch between strategies. We find that for optimal strategy choice, the hierarchy forms quickly, after which there are no mutually aggressive contests.
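
    A toy simulation in the spirit of the threshold-based winner/loser model described above (the parameter names, winning probability, and update rule are our assumptions, not the authors' equations) might look like this:

```python
import random

def simulate_hierarchy(n=6, theta=0.8, rounds=200, win_boost=1.1, loss_penalty=0.9):
    """Toy winner/loser-effect model: each individual carries a perceived fighting
    ability; a contest is fought only if the focal individual's assessment of its
    ability relative to its opponent exceeds the aggression threshold theta.
    Winners' perceived ability is multiplied by win_boost, losers' by loss_penalty.
    All values are illustrative assumptions, not the published model's."""
    ability = [1.0] * n                      # perceived fighting abilities
    wins = [0] * n
    for _ in range(rounds):
        i, j = random.sample(range(n), 2)
        if ability[i] / ability[j] < theta:  # focal individual declines to fight
            continue
        # Probability of i winning increases with its relative ability.
        p_i_wins = ability[i] / (ability[i] + ability[j])
        winner, loser = (i, j) if random.random() < p_i_wins else (j, i)
        ability[winner] *= win_boost
        ability[loser] *= loss_penalty
        wins[winner] += 1
    return sorted(range(n), key=lambda k: -wins[k])   # rank order by wins

random.seed(1)
print(simulate_hierarchy())   # individuals from most to least dominant
```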

  16. Overview of emerging nonvolatile memory technologies

    PubMed Central

    2014-01-01

    Nonvolatile memory technologies in Si-based electronics date back to the 1990s. The ferroelectric field-effect transistor (FeFET) was one of the most promising devices for replacing conventional Flash memory, which was already facing physical scaling limitations at the time. A variant of charge storage memory referred to as Flash memory is widely used in consumer electronic products such as cell phones and music players, while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory to replace conventional Flash memories have been proposed. Emerging memory technologies promise new memories to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets including digital cameras, cell phones and portable music players. They are being investigated as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory and so become very attractive as another possibility for future memory hierarchies. Many other new classes of emerging memory technologies such as transparent and plastic, three-dimensional (3-D), and quantum dot memory technologies have also gained tremendous popularity in recent years. Subsequently, it is not an exaggeration to say that computer memory could soon earn the ultimate commercial validation for scale-up and production: the cheap plastic knockoff. Therefore, this review is devoted to the rapidly developing new class of memory technologies and scaling of scientific procedures based on an investigation of recent progress in advanced Flash memory devices. PMID:25278820

  17. Overview of emerging nonvolatile memory technologies.

    PubMed

    Meena, Jagan Singh; Sze, Simon Min; Chand, Umesh; Tseng, Tseung-Yuen

    2014-01-01

    Nonvolatile memory technologies in Si-based electronics date back to the 1990s. The ferroelectric field-effect transistor (FeFET) was one of the most promising devices for replacing conventional Flash memory, which was already facing physical scaling limitations at the time. A variant of charge storage memory referred to as Flash memory is widely used in consumer electronic products such as cell phones and music players, while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory to replace conventional Flash memories have been proposed. Emerging memory technologies promise new memories to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets including digital cameras, cell phones and portable music players. They are being investigated as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory and so become very attractive as another possibility for future memory hierarchies. Many other new classes of emerging memory technologies such as transparent and plastic, three-dimensional (3-D), and quantum dot memory technologies have also gained tremendous popularity in recent years. Subsequently, it is not an exaggeration to say that computer memory could soon earn the ultimate commercial validation for scale-up and production: the cheap plastic knockoff. Therefore, this review is devoted to the rapidly developing new class of memory technologies and scaling of scientific procedures based on an investigation of recent progress in advanced Flash memory devices.

  18. Set-relevance determines the impact of distractors on episodic memory retrieval.

    PubMed

    Kwok, Sze Chai; Shallice, Tim; Macaluso, Emiliano

    2014-09-01

    We investigated the interplay between stimulus-driven attention and memory retrieval with a novel interference paradigm that engaged both systems concurrently on each trial. Participants encoded a 45-min movie on Day 1 and, on Day 2, performed a temporal order judgment task during fMRI. Each retrieval trial comprised three images presented sequentially, and the task required participants to judge the temporal order of the first and the last images ("memory probes") while ignoring the second image, which was task irrelevant ("attention distractor"). We manipulated the content relatedness and the temporal proximity between the distractor and the memory probes, as well as the temporal distance between two probes. Behaviorally, short temporal distances between the probes led to reduced retrieval performance. Distractors that at encoding were temporally close to the first probe image reduced these costs, specifically when the distractor was content unrelated to the memory probes. The imaging results associated the distractor probe temporal proximity with activation of the right ventral attention network. By contrast, the precuneus was activated for high-content relatedness between distractors and probes and in trials including a short distance between the two memory probes. The engagement of the right ventral attention network by specific types of distractors suggests a link between stimulus-driven attention control and episodic memory retrieval, whereas the activation pattern of the precuneus implicates this region in memory search within knowledge/content-based hierarchies.

  19. An analysis of an optimal selection process for characteristics and technical performance of baseball pitchers.

    PubMed

    Lin, Wen-Bin; Tung, I-Wu; Chen, Mei-Jung; Chen, Mei-Yen

    2011-08-01

    Selection of a qualified pitcher has relied previously on qualitative indices; here, both quantitative and qualitative indices, including pitching statistics, defense, mental skills, experience, and managers' recognition, were collected, and an analytic hierarchy process was used to rank baseball pitchers. The participants were 8 experts who ranked characteristics and statistics of 15 baseball pitchers who comprised the first round of potential representatives for the Chinese Taipei National Baseball team. The results indicated a selection rate that was 91% consistent with the official national team roster, as the 11 pitchers with the highest scores who were recommended as optimal choices to be official members of the Chinese Taipei National Baseball team actually participated in the 2009 Baseball World Cup. An analytic hierarchy process can aid in the selection of qualified pitchers, depending on situational and practical needs; the method could be extended to other sports and team-selection situations.

  20. An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandemir, Mahmut Taylan; Choudhary, Alok; Thakur, Rajeev

    In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology that targets I/O intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions. Two new sections in this report, compared to the previous report, are IOGenie and SSD/NVM-specific optimizations.

  1. Blackcomb: Hardware-Software Co-design for Non-Volatile Memory in Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiber, Robert

    Summary of technical results of Blackcomb memory devices: We explored several different memory technologies (STT-RAM, PCRAM, FeRAM, and ReRAM). The progress can be classified into the three categories below. Modeling and tool releases: Various modeling tools have been developed over the last decade to help in the design of SRAM- or DRAM-based memory hierarchies. To explore the new design opportunities that NVM technologies can bring to designers, we have developed similar high-level models for NVM, including PCRAMsim [Dong 2009], NVSim [Dong 2012], and NVMain [Poremba 2012]. NVSim is a circuit-level model for NVM performance, energy, and area estimation, which supports various NVM technologies, including STT-RAM, PCRAM, ReRAM, and legacy NAND Flash. NVSim has been successfully validated against industrial NVM prototypes, and it is expected to help boost architecture-level NVM-related studies. NVMain, on the other hand, is a cycle-accurate main memory simulator designed to simulate emerging nonvolatile memories at the architectural level. We have released these models as open-source tools and provided continuous support for them. We also proposed PS3-RAM, a fast, portable, and scalable statistical STT-RAM reliability analysis model [Wen 2012]. Design space exploration and optimization: With the support of these models, we explored different device/circuit optimization techniques. For example, in [Niu 2012a] we studied power reduction for the application of ECC schemes in ReRAM designs and proposed using the ECC code to relax the bit error rate (BER) requirement of a single memory cell, improving write energy consumption and latency for both 1T1R and cross-point ReRAM designs. In [Xu 2011], we proposed a methodology to design STT-RAM for different optimization goals, such as read performance, write performance, and write energy, by leveraging the trade-off between the write current and write time of the MTJ. We also studied the trade-offs in building a reliable cross-point ReRAM array [Niu 2012b]. We have conducted an in-depth analysis of the circuit- and system-level design implications of multi-level cell cross-point resistive RAM (MLC ReRAM) from performance, power, and reliability perspectives [Xu 2013]. The objective of this study was to understand the design trade-offs of this technology with respect to MLC phase-change memory (MLC PCM). Our MLC ReRAM design at the circuit and system levels indicates that different resistance allocation schemes, programming strategies, peripheral designs, and material selections profoundly affect the area, latency, power, and reliability of MLC ReRAM. Based on this analysis, we conducted two case studies: first, we compared MLC ReRAM design against MLC phase-change memory (PCM) and multi-layer cross-point ReRAM design, and pointed out why multi-level ReRAM is appealing; second, we further explored the design space for MLC ReRAM. Architecture and application: We explored hybrid checkpointing using phase-change memory for future exascale systems [Dong 2011] and showed that the use of nonvolatile memory for local checkpointing significantly increases the number of faults covered by local checkpoints and reduces the probability of a global failure in the middle of a global checkpoint to less than 1%. We also proposed a technique called i2WAP to mitigate write variations in NVM-based last-level caches and thereby improve NVM lifetime [Wang 2013].
    Our wear-leveling technique attempts to work around the limitations of write endurance by arranging data accesses so that write operations are distributed evenly across all the storage cells. During our intensive research on fault-tolerant NVM design, we found that ECC cannot effectively tolerate hard errors from limited write endurance and process imperfection. Therefore, we devised a novel Point and Discard (PAD) architecture in [ 2012] as a hard-error-tolerant architecture for ReRAM-based last-level caches. PAD improves the lifetime of ReRAM caches by 1.6X-440X under different process variations without performance overhead in the system's early life. We have also investigated the applicability of NVM for persistent memory design [Zhao 2013]. New byte-addressable NVM enables fast persistent memory that allows in-memory persistent data objects to be updated with much higher throughput. Despite the significant improvement, the performance of these designs is only 50% of that of a native system with no persistence support, due to the logging or copy-on-write mechanisms used to update the persistent memory. A challenge in this approach is therefore how to efficiently enable atomic, consistent, and durable updates to ensure data persistence that survives application and/or system failures. We designed a persistent memory system, called Kiln, that can provide performance close to that of the native system. The Kiln design adopts a non-volatile cache and a non-volatile main memory to construct a multi-versioned durable memory system, enabling atomic updates without logging or copy-on-write. Our evaluation shows that the proposed Kiln mechanism can achieve up to 2X performance improvement over NVRAM-based persistent memory employing write-ahead logging. In addition, our design has numerous practical advantages: a simple and intuitive abstract interface, microarchitecture-level optimizations, fast recovery from failures, and no redundant writes to slow non-volatile storage media. The work was published in MICRO 2013 and received a Best Paper Honorable Mention award.
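
    The wear-leveling idea mentioned above (evening out writes across storage cells with limited endurance) can be illustrated with a minimal, generic sketch; it is not the i2WAP scheme from the report, and the remap policy, swap threshold, and block counts below are illustrative assumptions only.

      # Minimal, generic wear-leveling sketch (illustrative only; not the i2WAP
      # scheme from the report). A logical-to-physical remap table is rotated so
      # that frequently written logical blocks migrate to the least-worn physical
      # blocks once a swap threshold is exceeded.
      SWAP_THRESHOLD = 100

      class WearLeveler:
          def __init__(self, n_blocks):
              self.l2p = list(range(n_blocks))      # logical -> physical map
              self.phys_writes = [0] * n_blocks     # wear per physical block
              self.since_swap = 0

          def write(self, logical_block):
              phys = self.l2p[logical_block]
              self.phys_writes[phys] += 1
              self.since_swap += 1
              if self.since_swap >= SWAP_THRESHOLD:
                  self._rebalance()
                  self.since_swap = 0
              return phys

          def _rebalance(self):
              # Swap the mappings of the most-worn and least-worn physical blocks.
              hot = max(range(len(self.phys_writes)), key=self.phys_writes.__getitem__)
              cold = min(range(len(self.phys_writes)), key=self.phys_writes.__getitem__)
              li, lj = self.l2p.index(hot), self.l2p.index(cold)
              self.l2p[li], self.l2p[lj] = self.l2p[lj], self.l2p[li]
              # A real controller would also copy the stored data between the two blocks.

      wl = WearLeveler(8)
      for _ in range(1000):
          wl.write(0)                               # pathological single-block workload
      print("writes per physical block:", wl.phys_writes)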

  2. Software Issues in High-Performance Computing and a Framework for the Development of HPC Applications

    DTIC Science & Technology

    1995-01-01

    possible to determine communication points. For this version, a C program spawning Posix threads and using semaphores to synchronize would have to...performance such as the time required for network communication and synchronization as well as issues of asynchrony and memory hierarchy. For example...enhances reusability. Process (or task) parallel computations can also be succinctly expressed with a small set of process creation and synchronization

  3. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 2, Issue 1

    DTIC Science & Technology

    2010-01-01

    Researchers in AHPCRC Technical Area 4 focus on improving processes for developing scalable, accurate parallel programs that are easily ported from one... Virtual levels in Sequoia represent an abstract memory hierarchy without specifying data transfer mechanisms, giving the

  4. A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework

    NASA Astrophysics Data System (ADS)

    Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo

    An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc. in a heterogeneous environment, and need to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. Identification and hierarchy of the framework requirements and the corresponding solutions for the reference MDO frameworks, the general one and the aircraft oriented one were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improvement of the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without a human interference.

  5. Evaluation of Markov-Decision Model for Instructional Sequence Optimization. Semi-Annual Technical Report for the period 1 July-31 December 1975. Technical Report No. 76.

    ERIC Educational Resources Information Center

    Wollmer, Richard D.; Bond, Nicholas A.

    Two computer-assisted instruction programs were written in electronics and trigonometry to test the Wollmer Markov Model for optimizing hierarchial learning; calibration samples totalling 110 students completed these programs. Since the model postulated that transfer effects would be a function of the amount of practice, half of the students were…

  6. Delineating the joint hierarchical structure of clinical and personality disorders in an outpatient psychiatric sample.

    PubMed

    Forbes, Miriam K; Kotov, Roman; Ruggero, Camilo J; Watson, David; Zimmerman, Mark; Krueger, Robert F

    2017-11-01

    A large body of research has focused on identifying the optimal number of dimensions - or spectra - to model individual differences in psychopathology. Recently, it has become increasingly clear that ostensibly competing models with varying numbers of spectra can be synthesized in empirically derived hierarchical structures. We examined the convergence between top-down (bass-ackwards or sequential principal components analysis) and bottom-up (hierarchical agglomerative cluster analysis) statistical methods for elucidating hierarchies to explicate the joint hierarchical structure of clinical and personality disorders. Analyses examined 24 clinical and personality disorders based on semi-structured clinical interviews in an outpatient psychiatric sample (n=2900). The two methods of hierarchical analysis converged on a three-tier joint hierarchy of psychopathology. At the lowest tier, there were seven spectra - disinhibition, antagonism, core thought disorder, detachment, core internalizing, somatoform, and compulsivity - that emerged in both methods. These spectra were nested under the same three higher-order superspectra in both methods: externalizing, broad thought dysfunction, and broad internalizing. In turn, these three superspectra were nested under a single general psychopathology spectrum, which represented the top tier of the hierarchical structure. The hierarchical structure mirrors and extends upon past research, with the inclusion of a novel compulsivity spectrum, and the finding that psychopathology is organized in three superordinate domains. This hierarchy can thus be used as a flexible and integrative framework to facilitate psychopathology research with varying levels of specificity (i.e., focusing on the optimal level of detailed information, rather than the optimal number of factors). Copyright © 2017 Elsevier Inc. All rights reserved.
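
    The bottom-up half of the analysis, hierarchical agglomerative clustering of disorder indicators, can be sketched with SciPy as follows; the data are random stand-ins for the interview-based disorder scores, and the linkage method and cluster counts are illustrative assumptions, not the study's exact settings.

      # Sketch of hierarchical agglomerative clustering of disorder indicators.
      # Random data stand in for the 24 clinical/personality disorder scores.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(0)
      n_patients, n_disorders = 200, 24
      scores = rng.normal(size=(n_patients, n_disorders))   # hypothetical data

      # Cluster disorders (columns) by the correlation distance between them.
      corr = np.corrcoef(scores.T)
      dist = 1.0 - corr[np.triu_indices(n_disorders, k=1)]  # condensed distance vector
      tree = linkage(dist, method="average")

      # Cut the tree at successive heights to read off tiers of the hierarchy.
      for n_clusters in (3, 7):
          labels = fcluster(tree, t=n_clusters, criterion="maxclust")
          print(n_clusters, "clusters:", labels)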

  7. Is awareness necessary for true inference?

    PubMed

    Leo, Peter D; Greene, Anthony J

    2008-09-01

    In transitive inference, participants learn a set of context-dependent discriminations that can be organized into a hierarchy that supports inference. Several studies show that inference occurs with or without task awareness. However, some studies assert that without awareness, performance is attributable to pseudoinference. By this account, inference-like performance is achieved by differential stimulus weighting according to the stimuli's proximity to the end items of the hierarchy. We implement an inference task that cannot be based on differential stimulus weighting. The design itself rules out pseudoinference strategies. Success on the task without evidence of deliberative strategies would therefore suggest that true inference can be achieved implicitly. We found that accurate performance on the inference task was not dependent on explicit awareness. The finding is consistent with a growing body of evidence that indicates that forms of learning and memory supporting inference and flexibility do not necessarily depend on task awareness.

  8. Parallel Optical Random Access Memory (PORAM)

    NASA Technical Reports Server (NTRS)

    Alphonse, G. A.

    1989-01-01

    It is shown that the need to minimize component count, power and size, and to maximize packing density require a parallel optical random access memory to be designed in a two-level hierarchy: a modular level and an interconnect level. Three module designs are proposed, in the order of research and development requirements. The first uses state-of-the-art components, including individually addressed laser diode arrays, acousto-optic (AO) deflectors and magneto-optic (MO) storage medium, aimed at moderate size, moderate power, and high packing density. The next design level uses an electron-trapping (ET) medium to reduce optical power requirements. The third design uses a beam-steering grating surface emitter (GSE) array to reduce size further and minimize the number of components.

  9. I/O efficient algorithms and applications in geographic information systems

    NASA Astrophysics Data System (ADS)

    Danner, Andrew

    Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two-level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.
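
    The core out-of-core pattern described above, streaming data from disk in blocks small enough for main memory while keeping only compact aggregates resident, can be sketched as follows for the DEM-gridding step; the file name, grid parameters, and chunk size are hypothetical, and this is a plain illustration of the pattern rather than the thesis's I/O-efficient algorithm.

      # Out-of-core sketch: stream a massive point file from disk in fixed-size
      # chunks and keep only per-cell aggregates (a mean-elevation DEM) in memory.
      # 'points.npy' and the grid parameters are hypothetical.
      import numpy as np

      CHUNK = 1_000_000                     # points per chunk (fits in RAM)
      NX, NY = 1000, 1000                   # DEM resolution
      X0, Y0, CELL = 0.0, 0.0, 1.0          # grid origin and cell size

      sum_z = np.zeros((NY, NX))
      count = np.zeros((NY, NX), dtype=np.int64)

      pts = np.load("points.npy", mmap_mode="r")         # (N, 3) array of x, y, z on disk
      for start in range(0, pts.shape[0], CHUNK):
          chunk = np.asarray(pts[start:start + CHUNK])   # one sequential read
          ix = ((chunk[:, 0] - X0) / CELL).astype(np.int64).clip(0, NX - 1)
          iy = ((chunk[:, 1] - Y0) / CELL).astype(np.int64).clip(0, NY - 1)
          np.add.at(sum_z, (iy, ix), chunk[:, 2])
          np.add.at(count, (iy, ix), 1)

      dem = np.where(count > 0, sum_z / np.maximum(count, 1), np.nan)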

  10. Optimal Foraging in Semantic Memory

    ERIC Educational Resources Information Center

    Hills, Thomas T.; Jones, Michael N.; Todd, Peter M.

    2012-01-01

    Do humans search in memory using dynamic local-to-global search strategies similar to those that animals use to forage between patches in space? If so, do their dynamic memory search policies correspond to optimal foraging strategies seen for spatial foraging? Results from a number of fields suggest these possibilities, including the shared…

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Vetter, Jeffrey S

    Computer architecture experts expect that non-volatile memory (NVM) hierarchies will play a more significant role in future systems including mobile, enterprise, and HPC architectures. With this expectation in mind, we present NVL-C: a novel programming system that facilitates the efficient and correct programming of NVM main memory systems. The NVL-C programming abstraction extends C with a small set of intuitive language features that target NVM main memory, and can be combined directly with traditional C memory model features for DRAM. We have designed these new features to enable compiler analyses and run-time checks that can improve performance and guard against a number of subtle programming errors, which, when left uncorrected, can corrupt NVM-stored data. Moreover, to enable recovery of data across application or system failures, these NVL-C features include a flexible directive for specifying NVM transactions. So that our implementation might be extended to other compiler front ends and languages, the majority of our compiler analyses are implemented in an extended version of LLVM's intermediate representation (LLVM IR). We evaluate NVL-C on a number of applications to show its flexibility, performance, and correctness.

  12. Limits to the usability of iconic memory

    PubMed Central

    Rensink, Ronald A.

    2014-01-01

    Human vision briefly retains a trace of a stimulus after it disappears. This trace—iconic memory—is often believed to be a surrogate for the original stimulus, a representational structure that can be used as if the original stimulus were still present. To investigate its nature, a flicker-search paradigm was developed that relied upon a full scan (rather than partial report) of its contents. Results show that for visual search it can indeed act as a surrogate, with little cost for alternating between visible and iconic representations. However, the duration over which it can be used depends on the type of task: some tasks can use iconic memory for at least 240 ms, others for only about 190 ms, while others for no more than about 120 ms. The existence of these different limits suggests that iconic memory may have multiple layers, each corresponding to a particular level of the visual hierarchy. In this view, the inability to use a layer of iconic memory may reflect an inability to maintain feedback connections to the corresponding representation. PMID:25221539

  13. Brain Behavior Evolution during Learning: Emergence of Hierarchical Temporal Memory

    DTIC Science & Technology

    2013-08-30

    organization and synapse strengthening and reconnection operating within and upon the existing processing structures[2]. To say the least, the brain is...that it is a tree increases, then we say its hierarchy increases. We explore different starting values and different thresholds and find that...impulses from two neuronal columns (say i and k) to reach column j at the exact same time. This means when column j is analyzing whether or not to

  14. Navigation in large information spaces represented as hypertext: A review of the literature

    NASA Technical Reports Server (NTRS)

    Brown, Marcus

    1990-01-01

    The problem addressed is the failure of information-space navigation tools when the space grows too large. The basic goal is to provide the power of the hypertext interface in such a way as to be most easily comprehensible to the user. It was determined that the optimal structure for information is an overlapping, simplified hierarchy. The hierarchical structure should be made obvious to the user, and many of the non-hierarchical links in the information space should either be eliminated or de-emphasized so that the novice user is not confused by them. Only one of the hierarchies should be very simple.

  15. Understanding Social Hierarchies: The Neural and Psychological Foundations of Status Perception

    PubMed Central

    Koski, Jessica; Xie, Hongling; Olson, Ingrid R.

    2017-01-01

    Social groups across species rapidly self-organize into hierarchies, where members vary in their level of power, influence, skill, or dominance. In this review we explore the nature of social hierarchies and the traits associated with status in both humans and nonhuman primates, and how status varies across development in humans. Our review finds that we can rapidly identify social status based on a wide range of cues. Like monkeys, we tend to use certain cues, like physical strength, to make status judgments, although layered on top of these more primitive perceptual cues are socio-cultural status cues like job titles and educational attainment. One's relative status has profound effects on attention, memory, and social interactions, as well as health and wellness. These effects can be particularly pernicious in children and adolescents. Developmental research on peer groups and social exclusion suggests teenagers may be particularly sensitive to social status information, but research focused specifically on status processing and associated brain areas is very limited. Recent evidence from neuroscience suggests there may be an underlying neural network, including regions involved in executive, emotional, and reward processing, that is sensitive to status information. We conclude with questions for future research as well as stressing the need to expand social neuroscience research on status processing to adolescents. PMID:25697184

  16. Adiabatic quantum optimization for associative memory recall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seddiqi, Hadayat; Humble, Travis S.

    Hopfield networks are a variant of associative memory that recall patterns stored in the couplings of an Ising model. Stored memories are conventionally accessed as fixed points in the network dynamics that correspond to energetic minima of the spin state. We show that memories stored in a Hopfield network may also be recalled by energy minimization using adiabatic quantum optimization (AQO). Numerical simulations of the underlying quantum dynamics allow us to quantify AQO recall accuracy with respect to the number of stored memories and noise in the input key. We investigate AQO performance with respect to how memories are stored in the Ising model according to different learning rules. Our results demonstrate that AQO recall accuracy varies strongly with learning rule, a behavior that is attributed to differences in energy landscapes. Consequently, learning rules offer a family of methods for programming adiabatic quantum optimization that we expect to be useful for characterizing AQO performance.

  17. Adiabatic Quantum Optimization for Associative Memory Recall

    NASA Astrophysics Data System (ADS)

    Seddiqi, Hadayat; Humble, Travis

    2014-12-01

    Hopfield networks are a variant of associative memory that recall patterns stored in the couplings of an Ising model. Stored memories are conventionally accessed as fixed points in the network dynamics that correspond to energetic minima of the spin state. We show that memories stored in a Hopfield network may also be recalled by energy minimization using adiabatic quantum optimization (AQO). Numerical simulations of the underlying quantum dynamics allow us to quantify AQO recall accuracy with respect to the number of stored memories and noise in the input key. We investigate AQO performance with respect to how memories are stored in the Ising model according to different learning rules. Our results demonstrate that AQO recall accuracy varies strongly with learning rule, a behavior that is attributed to differences in energy landscapes. Consequently, learning rules offer a family of methods for programming adiabatic quantum optimization that we expect to be useful for characterizing AQO performance.

  18. Adiabatic quantum optimization for associative memory recall

    DOE PAGES

    Seddiqi, Hadayat; Humble, Travis S.

    2014-12-22

    Hopfield networks are a variant of associative memory that recall patterns stored in the couplings of an Ising model. Stored memories are conventionally accessed as fixed points in the network dynamics that correspond to energetic minima of the spin state. We show that memories stored in a Hopfield network may also be recalled by energy minimization using adiabatic quantum optimization (AQO). Numerical simulations of the underlying quantum dynamics allow us to quantify AQO recall accuracy with respect to the number of stored memories and noise in the input key. We investigate AQO performance with respect to how memories are stored in the Ising model according to different learning rules. Our results demonstrate that AQO recall accuracy varies strongly with learning rule, a behavior that is attributed to differences in energy landscapes. Consequently, learning rules offer a family of methods for programming adiabatic quantum optimization that we expect to be useful for characterizing AQO performance.
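
    The recall-by-energy-minimization setup shared by the three records above can be illustrated classically: the sketch below stores patterns with the Hebbian learning rule and recalls them by asynchronous descent on the Ising energy, whereas the papers replace this descent with adiabatic quantum optimization. Pattern count, network size, and noise level are illustrative assumptions.

      # Classical sketch of Hopfield recall by energy minimization (the papers
      # perform this minimization with AQO instead of asynchronous descent).
      import numpy as np

      rng = np.random.default_rng(1)
      N, P = 64, 3                                  # spins and stored patterns
      patterns = rng.choice([-1, 1], size=(P, N))

      # Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling.
      J = patterns.T @ patterns / N
      np.fill_diagonal(J, 0.0)

      def energy(s):
          return -0.5 * s @ J @ s

      def recall(key, sweeps=20):
          s = key.copy()
          for _ in range(sweeps):
              for i in rng.permutation(N):          # asynchronous updates lower the energy
                  s[i] = 1 if J[i] @ s >= 0 else -1
          return s

      # Noisy input key: stored pattern 0 with 15% of its spins flipped.
      key = patterns[0].copy()
      flip = rng.choice(N, size=int(0.15 * N), replace=False)
      key[flip] *= -1

      out = recall(key)
      print("overlap with stored pattern:", (out @ patterns[0]) / N)
      print("energy before/after:", energy(key), energy(out))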

  19. Generic hierarchical engine for mask data preparation

    NASA Astrophysics Data System (ADS)

    Kalus, Christian K.; Roessl, Wolfgang; Schnitker, Uwe; Simecek, Michal

    2002-07-01

    Electronic layouts are usually flattened on their path from the hierarchical source downstream to the wafer. Mask data preparation has long been identified as a severe bottleneck. Data volumes are not only doubling every year along the ITRS roadmap; with the advent of optical proximity correction and phase-shifting masks, they are escalating to unmanageable heights. Hierarchical treatment is one of the most powerful means to keep memory and CPU consumption in reasonable ranges. Only recently, however, has this technique acquired more public attention. Mask data preparation is the most critical area calling for a sound infrastructure to reduce the handling problem. Other applications, such as large-area simulation and manufacturing rule checking (MRC), are gaining more and more attention as well; they would all profit from a generic engine capable of efficiently treating hierarchical data. In this paper we present a generic engine for hierarchical treatment which solves the major problem, steady transitions along cell borders. Several alternatives exist for walking through the hierarchy tree; they have, to date, not been thoroughly investigated. One is a bottom-up approach that treats cells starting with the most elementary ones. The other is a top-down approach, which lends itself to creating a new hierarchy tree. In addition, since the variety, degree of hierarchy, and quality of layouts extend over a wide range, a generic engine has to make intelligent decisions when exploding the hierarchy tree. Several applications will be shown, in particular how far the limits can be pushed with the current hierarchical engine.

  20. Memory as the "whole brain work": a large-scale model based on "oscillations in super-synergy".

    PubMed

    Başar, Erol

    2005-01-01

    According to recent trends, memory depends on several brain structures working in concert across many levels of neural organization; "memory is a constant work in progress." The proposition of a brain theory based on super-synergy in neural populations is most pertinent for understanding this constant work in progress. This report introduces a new model of memory based on the processes of EEG oscillations and brain dynamics. The model is shaped by the following conceptual and experimental steps: 1. The machineries of super-synergy in the whole brain are responsible for the formation of sensory-cognitive percepts. 2. The expression "dynamic memory" is used for memory processes that evoke relevant changes in alpha, gamma, theta and delta activities. The concerted action of distributed multiple oscillatory processes provides a major key for the understanding of distributed memory; it also comprehends phyletic memory and reflexes. 3. The evolving memory, which incorporates reciprocal actions or reverberations in the APLR alliance and during working memory processes, is especially emphasized. 4. A new model related to a "hierarchy of memories as a continuum" is introduced. 5. The notions of "longer activated memory" and "persistent memory" are proposed instead of long-term memory. 6. The new analysis for recognizing faces emphasizes the importance of EEG oscillations in neurophysiology and Gestalt analysis. 7. The proposed basic framework, called "memory in the whole brain work," emphasizes that memory and all brain functions are inseparable and act as a "whole" in the whole brain. 8. According to recent publications, the role of genetic factors is fundamental in living system settings and oscillations, and accordingly in memory. 9. A link from the "whole brain" to the "whole body," incorporating the vegetative and neurological systems, is proposed, with EEG oscillations and ultraslow oscillations serving as control parameters.

  1. SU(2) lattice gauge theory simulations on Fermi GPUs

    NASA Astrophysics Data System (ADS)

    Cardoso, Nuno; Bicudo, Pedro

    2011-05-01

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., with an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T, and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2,000 configurations with APE smearing. With two Fermi GPUs we have achieved an excellent performance of 200× the speed of one CPU, in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single precision computations.

  2. Logical definability and asymptotic growth in optimization and counting problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Compton, K.

    1994-12-31

    There has recently been a great deal of interest in the relationship between logical definability and NP-optimization problems. Let MS_n (resp. MP_n) be the class of problems to compute, for a given finite structure A, the maximum number of tuples x̄ in A satisfying a Σ_n (resp. Π_n) formula ψ(x̄, S̄) as S̄ ranges over predicates on A. Kolaitis and Thakur showed that the classes MS_n and MP_n collapse to a hierarchy of four levels. Papadimitriou and Yannakakis previously showed that problems in the two lowest levels MS_0 and MS_1 (which they called Max Snp and Max Np) are approximable to within a constant factor in polynomial time. Similarly, Saluja, Subrahmanyam, and Thakur defined SS_n (resp. SP_n) to be the class of problems to compute, for a given finite structure A, the number of tuples (T̄, S̄) satisfying a given Σ_n (resp. Π_n) formula ψ(T̄, S̄) in A. They showed that the classes SS_n and SP_n collapse to a hierarchy of five levels and that problems in the two lowest levels SS_0 and SS_1 have a fully polynomial time randomized approximation scheme. We define extended classes MSF_n, MPF_n, SSF_n, and SPF_n by allowing formulae to contain predicates definable in a logic known as least fixpoint logic. The resulting hierarchies collapse to the same number of levels and problems in the bottom levels can be approximated as before, but now some problems descend from the highest levels in the original hierarchies to the lowest levels in the new hierarchies. We introduce a method for characterizing rates of growth of average solution sizes, thereby showing that a number of important problems do not belong to MSF_1 or SSF_1. This method is related to limit laws for logics and the probabilistic method from combinatorics.
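
    As a concrete instance of the lowest level MS_0 (Max Snp), the standard MAX CUT example can be written in the maximize-the-number-of-satisfying-tuples form described above. This is the well-known textbook example due to Papadimitriou and Yannakakis, not one drawn from this abstract:

      % MAX CUT as an MS_0 (Max Snp) problem: maximize, over set predicates S,
      % the number of tuples satisfying a quantifier-free formula.
      \[
        \mathrm{MAXCUT}(G) \;=\; \max_{S \subseteq V}\,
        \bigl|\{\, (x,y) : E(x,y) \wedge S(x) \wedge \neg S(y) \,\}\bigr|
      \]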

  3. Development of a Multilevel Optimization Approach to the Design of Modern Engineering Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Barthelemy, J. F. M.

    1983-01-01

    A general algorithm is proposed which carries out the design process iteratively, starting at the top of the hierarchy and proceeding downward. Each subproblem is optimized separately for fixed controls from higher-level subproblems. An optimum sensitivity analysis is then performed which determines the sensitivity of the subproblem design to changes in higher-level subproblem controls. The resulting sensitivity derivatives are used to construct constraints which force the controlling subproblems into choosing their own designs so as to improve the lower-level subproblem designs while satisfying their own constraints. The applicability of the proposed algorithm is demonstrated by devising a four-level hierarchy to perform the simultaneous aerodynamic and structural design of a high-performance sailplane wing for maximum cross-country speed. Finally, the concepts discussed are applied to the two-level minimum-weight structural design of the sailplane wing. The numerical experiments show that discontinuities in the sensitivity derivatives may delay convergence, but that the algorithm is robust enough to overcome these discontinuities and produce low-weight feasible designs, regardless of whether the optimization is started from the feasible space or the infeasible one.

  4. State recovery and lockstep execution restart in a system with multiprocessor pairing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    System, method, and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores that provides one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to the selective pairing facility via a switch or a bus. Each selectively paired processor core includes a transactional execution facility, wherein the system is configured to enable processor rollback to a previous state and reinitialize lockstep execution in order to recover from an incorrect execution when one has been detected by the selective pairing facility.

  5. Discovering Event Structure in Continuous Narrative Perception and Memory.

    PubMed

    Baldassano, Christopher; Chen, Janice; Zadbood, Asieh; Pillow, Jonathan W; Hasson, Uri; Norman, Kenneth A

    2017-08-02

    During realistic, continuous perception, humans automatically segment experiences into discrete events. Using a novel model of cortical event dynamics, we investigate how cortical structures generate event representations during narrative perception and how these events are stored to and retrieved from memory. Our data-driven approach allows us to detect event boundaries as shifts between stable patterns of brain activity without relying on stimulus annotations and reveals a nested hierarchy from short events in sensory regions to long events in high-order areas (including angular gyrus and posterior medial cortex), which represent abstract, multimodal situation models. High-order event boundaries are coupled to increases in hippocampal activity, which predict pattern reinstatement during later free recall. These areas also show evidence of anticipatory reinstatement as subjects listen to a familiar narrative. Based on these results, we propose that brain activity is naturally structured into nested events, which form the basis of long-term memory representations. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Optimal colour quality of LED clusters based on memory colours.

    PubMed

    Smet, Kevin; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter

    2011-03-28

    The spectral power distributions of tri- and tetrachromatic clusters of Light-Emitting-Diodes, composed of simulated and commercially available LEDs, were optimized with a genetic algorithm to maximize the luminous efficacy of radiation and the colour quality as assessed by the memory colour quality metric developed by the authors. The trade-off between the colour quality as assessed by the memory colour metric and the luminous efficacy of radiation was investigated by calculating the Pareto optimal front using the NSGA-II genetic algorithm. Optimal peak wavelengths and spectral widths of the LEDs were derived, and over half of them were found to be close to Thornton's prime colours. The Pareto optimal fronts of real LED clusters were always found to be smaller than those of the simulated clusters. The effect of binning on designing a real LED cluster was investigated and was found to be quite large. Finally, a real LED cluster of commercially available AlGaInP, InGaN and phosphor white LEDs was optimized to obtain a higher score on the memory colour quality scale than its corresponding CIE reference illuminant.
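
    The Pareto-optimal front referred to above trades off two maximized objectives; a minimal sketch of the non-dominated filtering step on random stand-in candidates is given below. A full NSGA-II run, as used in the study, would also evolve the candidate spectra rather than merely filter them.

      # Extract a Pareto-optimal front for two maximized objectives
      # (luminous efficacy of radiation, memory colour quality score).
      # The candidate values are random stand-ins for evaluated LED clusters.
      import numpy as np

      rng = np.random.default_rng(2)
      candidates = rng.uniform(size=(200, 2))   # columns: [efficacy, colour quality]

      def pareto_front(points):
          """Indices of points not dominated by any other point (maximization)."""
          keep = []
          for i, p in enumerate(points):
              dominated = np.any(np.all(points >= p, axis=1) &
                                 np.any(points > p, axis=1))
              if not dominated:
                  keep.append(i)
          return np.array(keep)

      front = pareto_front(candidates)
      print(len(front), "non-dominated clusters out of", len(candidates))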

  7. Structural hierarchy of autism spectrum disorder symptoms: an integrative framework.

    PubMed

    Kim, Hyunsik; Keifer, Cara M; Rodriguez-Seijas, Craig; Eaton, Nicholas R; Lerner, Matthew D; Gadow, Kenneth D

    2018-01-01

    In an attempt to resolve questions regarding the symptom classification of autism spectrum disorder (ASD), previous research generally aimed to demonstrate superiority of one model over another. Rather than adjudicating which model may be optimal, we propose an alternative approach that integrates competing models using Goldberg's bass-ackwards method, providing a comprehensive understanding of the underlying symptom structure of ASD. The study sample comprised 3,825 individuals, consecutive referrals to a university hospital developmental disabilities specialty clinic or a child psychiatry outpatient clinic. This study analyzed DSM-IV-referenced ASD symptom statements from parent and teacher versions of the Child and Adolescent Symptom Inventory-4R. A series of exploratory structural equation models was conducted in order to produce interpretable latent factors that account for multivariate covariance. Results indicated that ASD symptoms were structured into an interpretable hierarchy across multiple informants. This hierarchy includes five levels; key features of ASD bifurcate into different constructs with increasing specificity. This is the first study to examine an underlying structural hierarchy of ASD symptomatology using the bass-ackwards method. This hierarchy demonstrates how core features of ASD relate at differing levels of resolution, providing a model for conceptualizing ASD heterogeneity and a structure for integrating divergent theories of cognitive processes and behavioral features that define the disorder. These findings suggest that a more coherent and complete understanding of the structure of ASD symptoms may be reflected in a metastructure rather than at one level of resolution. © 2017 Association for Child and Adolescent Mental Health.

  8. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
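
    The optimal model referred to in such cue-integration studies is typically the reliability-weighted combination, in which each cue is weighted by its inverse variance; a minimal sketch with made-up form and motion reliabilities follows. The specific formula is an assumption about the model class, since the abstract does not spell it out.

      # Reliability-weighted ("optimal") integration of two cues, e.g. facial form
      # and facial motion. Cue means and variances are hypothetical; the combined
      # estimate weights each cue by its inverse variance.
      def integrate(mu_form, var_form, mu_motion, var_motion):
          w_form = (1 / var_form) / (1 / var_form + 1 / var_motion)
          w_motion = 1 - w_form
          mu = w_form * mu_form + w_motion * mu_motion
          var = 1 / (1 / var_form + 1 / var_motion)   # never exceeds either cue's variance
          return mu, var

      mu, var = integrate(mu_form=0.8, var_form=0.04, mu_motion=0.5, var_motion=0.16)
      print(f"combined estimate {mu:.2f}, variance {var:.3f}")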

  9. Effectiveness evaluation of double-layered satellite network with laser and microwave hybrid links based on fuzzy analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Rao, Qiaomeng

    2018-01-01

    To address the demands of high speed and large capacity under the limited spectrum resources of satellite communication networks, a double-layered satellite network with global seamless coverage based on laser and microwave hybrid links is proposed in this paper. By analyzing the characteristics of the double-layered satellite network with laser and microwave hybrid links, an effectiveness evaluation index system for the network is established. And then, the fuzzy analytic hierarchy process, which combines the analytic hierarchy process and the fuzzy comprehensive evaluation theory, is used to evaluate the effectiveness of the double-layered satellite network with laser and microwave hybrid links. Furthermore, the evaluation result of the proposed hybrid link network is obtained by simulation. The effectiveness evaluation process of the proposed double-layered satellite network with laser and microwave hybrid links can help to optimize the design of hybrid link double-layered satellite networks and improve the operating efficiency of the satellite system.

  10. A GPU-Accelerated Approach for Feature Tracking in Time-Varying Imagery Datasets.

    PubMed

    Peng, Chao; Sahani, Sandip; Rushing, John

    2017-10-01

    We propose a novel parallel connected component labeling (CCL) algorithm along with efficient out-of-core data management to detect and track feature regions of large time-varying imagery datasets. Our approach contributes to the big data field with parallel algorithms tailored for GPU architectures. We remove the data dependency between frames and achieve pixel-level parallelism. Due to the large size, the entire dataset cannot fit into cached memory. Frames have to be streamed through the memory hierarchy (disk to CPU main memory and then to GPU memory), partitioned, and processed as batches, where each batch is small enough to fit into the GPU. To reconnect the feature regions that are separated due to data partitioning, we present a novel batch merging algorithm to extract the region connection information across multiple batches in a parallel fashion. The information is organized in a memory-efficient structure and supports fast indexing on the GPU. Our experiment uses a commodity workstation equipped with a single GPU. The results show that our approach can efficiently process a weather dataset composed of terabytes of time-varying radar images. The advantages of our approach are demonstrated by comparing to the performance of an efficient CPU cluster implementation which is being used by the weather scientists.
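
    The connected component labeling step at the heart of this approach can be sketched serially with union-find on a single binary frame, as below; the paper's contribution, the GPU-parallel, out-of-core version with cross-batch region merging, is not reproduced here.

      # Serial sketch of connected component labeling (CCL) on one binary frame
      # using union-find with path compression; 4-connectivity.
      import numpy as np

      def find(parent, x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]        # path compression
              x = parent[x]
          return x

      def label(mask):
          h, w = mask.shape
          parent = {}
          for y in range(h):
              for x in range(w):
                  if not mask[y, x]:
                      continue
                  parent.setdefault((y, x), (y, x))
                  for ny, nx in ((y - 1, x), (y, x - 1)):   # previously visited neighbors
                      if 0 <= ny and 0 <= nx and mask[ny, nx]:
                          ra, rb = find(parent, (y, x)), find(parent, (ny, nx))
                          if ra != rb:
                              parent[ra] = rb              # union the two regions
          roots = {r: i for i, r in enumerate(sorted({find(parent, p) for p in parent}))}
          out = np.full(mask.shape, -1, dtype=int)
          for p in parent:
              out[p] = roots[find(parent, p)]
          return out

      frame = np.array([[1, 1, 0, 0],
                        [0, 1, 0, 1],
                        [0, 0, 0, 1]], dtype=bool)
      print(label(frame))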

  11. Long-term memory of hierarchical relationships in free-living greylag geese.

    PubMed

    Weiss, Brigitte M; Scheiber, Isabella B R

    2013-01-01

    Animals may memorise spatial and social information for many months and even years. Here, we investigated long-term memory of hierarchically ordered relationships, where the position of a reward depended on the relationship of a stimulus relative to other stimuli in the hierarchy. Seventeen greylag geese (Anser anser) had been trained on discriminations between successive pairs of five or seven implicitly ordered colours, where the higher ranking colour in each pair was rewarded. Geese were re-tested on the task 2, 6 and 12 months after learning the dyadic colour relationships. They chose the correct colour above chance at all three points in time, whereby performance was better in colour pairs at the beginning or end of the colour series. Nonetheless, they also performed above chance on internal colour pairs, which is indicative of long-term memory for quantitative differences in associative strength and/or for relational information. There were no indications for a decline in performance over time, indicating that geese may remember dyadic relationships for at least 6 months and probably well over 1 year. Furthermore, performance in the memory task was unrelated to the individuals' sex and their performance while initially learning the dyadic colour relationships. We discuss possible functions of this long-term memory in the social domain.

  12. Health, Health Care, and Systems Science: Emerging Paradigm.

    PubMed

    Janecka, Ivo

    2017-02-15

    Health is a continuum of an optimized state of a biologic system, an outcome of positive relationships with the self and others. A healthy system follows the principles of systems science derived from observations of nature, highlighting the character of relationships as the key determinant. Relationships evolve from our decisions, which are consequential to the function of our own biologic system on all levels, including the genome, where epigenetics impact our morphology. In healthy systems, decisions emanate from the reciprocal collaboration of hippocampal memory and the executive prefrontal cortex. We can decide to change relationships through choices. What is selected, however, only represents the cognitive interpretation of our limited sensory perception; it strongly reflects inherent biases toward either optimizing state, making a biologic system healthy, or not. Health or its absence is then the outcome; there is no inconsequential choice. Public health effort should not focus on punitive steps (e.g. taxation of unhealthy products or behaviors) in order to achieve a higher level of public's health. It should teach people the process of making healthy decisions; otherwise, people will just migrate/shift from one unhealthy product/behavior to another, and well-intended punitive steps will not make much difference. Physical activity, accompanied by nutrition and stress management, have the greatest impact on fashioning health and simultaneously are the most cost-effective measures. Moderate-to-vigorous exercise not only improves aerobic fitness but also positively influences cognition, including memory and senses. Collective, rational societal decisions can then be anticipated. Health care is a business system principally governed by self-maximizing decisions of its components; uneven and contradictory outcomes are the consequences within such a non-optimized system. Health is not health care. We are biologic systems subject to the laws of biology in spite of our incongruous decisions that are detrimental to health. A biologic system/a human body originates from structural, deterministic genes as well as shared epigenetic memory of our ancestors affecting our bodily function and structure. The political governing systems' vertical hierarchy has control over money and laws, neither of which materially affect individual lifestyle/behavioral choices toward health. Improved health comes from focusing on enhancing the biologic age and not the chronologic one, which simply represents a linear time from a birth certificate to a death certificate and is applicable only in its extremes. "Age-related diseases" are simply reflections of a given culture. Biologic age, reflecting the actual state of health, could be used in all health-related assessments including health-life insurance premiums, licensing of job categories, etc., all with a broader and healthy societal impact.

  13. Health, Health Care, and Systems Science: Emerging Paradigm

    PubMed Central

    2017-01-01

    Health is a continuum of an optimized state of a biologic system, an outcome of positive relationships with the self and others. A healthy system follows the principles of systems science derived from observations of nature, highlighting the character of relationships as the key determinant. Relationships evolve from our decisions, which are consequential to the function of our own biologic system on all levels, including the genome, where epigenetics impact our morphology. In healthy systems, decisions emanate from the reciprocal collaboration of hippocampal memory and the executive prefrontal cortex. We can decide to change relationships through choices. What is selected, however, only represents the cognitive interpretation of our limited sensory perception; it strongly reflects inherent biases toward either optimizing state, making a biologic system healthy, or not. Health or its absence is then the outcome; there is no inconsequential choice. Public health effort should not focus on punitive steps (e.g. taxation of unhealthy products or behaviors) in order to achieve a higher level of public’s health. It should teach people the process of making healthy decisions; otherwise, people will just migrate/shift from one unhealthy product/behavior to another, and well-intended punitive steps will not make much difference. Physical activity, accompanied by nutrition and stress management, have the greatest impact on fashioning health and simultaneously are the most cost-effective measures. Moderate-to-vigorous exercise not only improves aerobic fitness but also positively influences cognition, including memory and senses. Collective, rational societal decisions can then be anticipated. Health care is a business system principally governed by self-maximizing decisions of its components; uneven and contradictory outcomes are the consequences within such a non-optimized system. Health is not health care. We are biologic systems subject to the laws of biology in spite of our incongruous decisions that are detrimental to health. A biologic system/a human body originates from structural, deterministic genes as well as shared epigenetic memory of our ancestors affecting our bodily function and structure. The political governing systems’ vertical hierarchy has control over money and laws, neither of which materially affect individual lifestyle/behavioral choices toward health. Improved health comes from focusing on enhancing the biologic age and not the chronologic one, which simply represents a linear time from a birth certificate to a death certificate and is applicable only in its extremes. “Age-related diseases” are simply reflections of a given culture. Biologic age, reflecting the actual state of health, could be used in all health-related assessments including health-life insurance premiums, licensing of job categories, etc., all with a broader and healthy societal impact. PMID:28357162

  14. A numerical similarity approach for using retired Current Procedural Terminology (CPT) codes for electronic phenotyping in the Scalable Collaborative Infrastructure for a Learning Health System (SCILHS).

    PubMed

    Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N

    2015-12-11

    Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often not widely interoperable, or have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely-available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically year-to-year - codes are retired/replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97% precision when considering only miscategorizations ("correctness precision") and 52% precision using a gold-standard of optimal placement ("optimality precision"). High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer can quickly validate. Lower optimality precision meant that codes were not often placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93% of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily-validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and successful grouping of retired with non-retired codes.
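
    The abstract does not give the exact placement rule, so the sketch below only illustrates the general idea of assigning a retired numeric code to the grouper whose code range contains it, falling back to the numerically nearest range boundary; the ranges, category names, and retired codes are hypothetical stand-ins, not the BioPortal hierarchy or the published method.

      # Hedged sketch of placing retired CPT codes by numerical similarity.
      # Groupers and retired codes below are illustrative, not real BioPortal entries.
      GROUPERS = {
          "Surgery/Integumentary (10021-19499)": (10021, 19499),
          "Surgery/Musculoskeletal (20000-29999)": (20000, 29999),
          "Radiology (70010-79999)": (70010, 79999),
      }

      def place(code):
          for name, (lo, hi) in GROUPERS.items():
              if lo <= code <= hi:
                  return name
          # No containing range: fall back to the numerically nearest boundary.
          return min(GROUPERS, key=lambda n: min(abs(code - GROUPERS[n][0]),
                                                 abs(code - GROUPERS[n][1])))

      for retired in (15342, 31999, 76012):          # hypothetical retired codes
          print(retired, "->", place(retired))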

  15. Optimal Medical Equipment Maintenance Service Proposal Decision Support System combining Activity Based Costing (ABC) and the Analytic Hierarchy Process (AHP).

    PubMed

    da Rocha, Leticia; Sloane, Elliot; M Bassani, Jose

    2005-01-01

    This study describes a framework to support the choice of the maintenance service (in-house or third party contract) for each category of medical equipment based on: a) the real medical equipment maintenance management system currently used by the biomedical engineering group of the public health system of the Universidade Estadual de Campinas located in Brazil to control the medical equipment maintenance service, b) the Activity Based Costing (ABC) method, and c) the Analytic Hierarchy Process (AHP) method. Results show the cost and performance related to each type of maintenance service. Decision-makers can use these results to evaluate possible strategies for the categories of equipment.

  16. CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.

    PubMed

    Zahery, Mahsa; Maes, Hermine H; Neale, Michael C

    2017-08-01

    We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of the SQP method in Fortran (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory), and SLSQP (another SQP implementation available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
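
    CSOLNP itself ships inside the OpenMx R package, but SLSQP, one of the optimizers compared above, is also available through SciPy; the following minimal example shows the kind of nonlinearly constrained problem all three SQP-family optimizers solve. The objective and constraints are illustrative, not taken from the article.

      # Minimal nonlinearly constrained problem solved with SciPy's SLSQP
      # (one of the three SQP-family optimizers compared in the article).
      import numpy as np
      from scipy.optimize import minimize

      objective = lambda x: (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2

      constraints = (
          {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},   # x0 - 2*x1 + 2 >= 0
          {"type": "eq",   "fun": lambda x: x[0] ** 2 + x[1] - 3},  # x0^2 + x1 = 3
      )

      res = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
                     bounds=[(0, None), (0, None)], constraints=constraints)
      print(res.x, res.fun)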

  17. Everyday Experiences of Memory Problems and Control: The Adaptive Role of Selective Optimization with Compensation in the Context of Memory Decline

    PubMed Central

    Hahn, Elizabeth A.; Lachman, Margie E.

    2014-01-01

    The present study examined the role of long-term working memory decline in the relationship between everyday experiences of memory problems and perceived control, and we also considered whether the use of accommodative strategies [selective optimization with compensation (SOC)] would be adaptive. The study included Boston-area participants (n=103) from the Midlife in the United States study (MIDUS) who completed two working memory assessments over ten years and weekly diaries following Time 2. In adjusted multi-level analyses, greater memory decline and lower general perceived control were associated with more everyday memory problems. Low perceived control reported in a weekly diary was associated with more everyday memory problems among those with greater memory decline and low SOC strategy use (Est.=−0.28, SE=0.13, p=.036). These results suggest that the use of SOC strategies in the context of declining memory may help to buffer the negative effects of low perceived control on everyday memory. PMID:24597768

  18. Everyday experiences of memory problems and control: the adaptive role of selective optimization with compensation in the context of memory decline.

    PubMed

    Hahn, Elizabeth A; Lachman, Margie E

    2015-01-01

    The present study examined the role of long-term working memory decline in the relationship between everyday experiences of memory problems and perceived control, and we also considered whether the use of accommodative strategies [selective optimization with compensation (SOC)] would be adaptive. The study included Boston-area participants (n = 103) from the Midlife in the United States study (MIDUS) who completed two working memory assessments over 10 years and weekly diaries following Time 2. In adjusted multi-level analyses, greater memory decline and lower general perceived control were associated with more everyday memory problems. Low perceived control reported in a weekly diary was associated with more everyday memory problems among those with greater memory decline and low SOC strategy use (Est. = -0.28, SE= 0.13, p = .036). These results suggest that the use of SOC strategies in the context of declining memory may help to buffer the negative effects of low perceived control on everyday memory.

  19. Optimization of Apparatus Design and Behavioral Measures for the Assessment of Visuo-Spatial Learning and Memory of Mice on the Barnes Maze

    ERIC Educational Resources Information Center

    O'Leary, Timothy P.; Brown, Richard E.

    2013-01-01

    We have previously shown that apparatus design can affect visual-spatial cue use and memory performance of mice on the Barnes maze. The present experiment extends these findings by determining the optimal behavioral measures and test procedure for analyzing visuo-spatial learning and memory in three different Barnes maze designs. Male and female…

  20. Parameter optimization for transitions between memory states in small arrays of Josephson junctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rezac, Jacob D.; Imam, Neena; Braiman, Yehuda

    Coupled arrays of Josephson junctions possess multiple stable zero-voltage states. Such states can store information and consequently can be utilized for cryogenic memory applications. Basic memory operations can be implemented by sending a pulse to one of the junctions and studying transitions between the states. To be suitable for memory operations, such transitions have to be fast and energy efficient. In this article we employed simulated annealing, a stochastic optimization algorithm, to optimize the array parameters so as to minimize the times and energies of transitions between specifically chosen states that can be utilized for memory operations (Read, Write, and Reset). Simulation results show that such transitions occur with access times on the order of 10–100 ps and access energies on the order of 10^-19 to 5×10^-18 J. The numerical simulations are validated with approximate analytical results.
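
    The optimization loop itself is generic simulated annealing; a sketch follows, with a stand-in cost function (the actual junction-array transition model from the article is not reproduced here).

        import math, random

        def transition_cost(params):
            # Placeholder for the simulated transition time/energy of a Read/Write/Reset
            # operation given the candidate array parameters.
            return sum((p - 0.5) ** 2 for p in params)

        def anneal(n_params=3, steps=10000, temp=1.0, cooling=0.999):
            current = [random.random() for _ in range(n_params)]
            best = list(current)
            for _ in range(steps):
                cand = [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in current]
                delta = transition_cost(cand) - transition_cost(current)
                # Accept improvements always, and worse moves with a temperature-dependent probability.
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    current = cand
                    if transition_cost(current) < transition_cost(best):
                        best = list(current)
                temp *= cooling
            return best

        print(anneal())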

  1. Choosing an Optical Disc System: A Guide for Users and Resellers.

    ERIC Educational Resources Information Center

    Vane-Tempest, Stewart

    1995-01-01

    Presents a guide for selecting an optical disc system. Highlights include storage hierarchy; standards; data life cycles; security; implementing an optical jukebox system; optimizing the system; performance; quality and reliability; software; cost of online versus near-line storage; and growing opportunities. Sidebars provide additional information on…

  2. RchyOptimyx: Cellular Hierarchy Optimization for Flow Cytometry

    PubMed Central

    Aghaeepour, Nima; Jalali, Adrin; O’Neill, Kieran; Chattopadhyay, Pratip K.; Roederer, Mario; Hoos, Holger H.; Brinkman, Ryan R.

    2013-01-01

    Analysis of high-dimensional flow cytometry datasets can reveal novel cell populations with poorly understood biology. Following discovery, characterization of these populations in terms of the critical markers involved is an important step, as this can help to both better understand the biology of these populations and aid in designing simpler marker panels to identify them on simpler instruments and with fewer reagents (i.e., in resource poor or highly regulated clinical settings). However, current tools to design panels based on the biological characteristics of the target cell populations work exclusively based on technical parameters (e.g., instrument configurations, spectral overlap, and reagent availability). To address this shortcoming, we developed RchyOptimyx (cellular hieraRCHY OPTIMization), a computational tool that constructs cellular hierarchies by combining automated gating with dynamic programming and graph theory to provide the best gating strategies to identify a target population to a desired level of purity or correlation with a clinical outcome, using the simplest possible marker panels. RchyOptimyx can assess and graphically present the trade-offs between marker choice and population specificity in high-dimensional flow or mass cytometry datasets. We present three proof-of-concept use cases for RchyOptimyx that involve 1) designing a panel of surface markers for identification of rare populations that are primarily characterized using their intracellular signature; 2) simplifying the gating strategy for identification of a target cell population; 3) identification of a non-redundant marker set to identify a target cell population. PMID:23044634

  3. Helium Nanobubbles Enhance Superelasticity and Retard Shear Localization in Small-Volume Shape Memory Alloy.

    PubMed

    Han, Wei-Zhong; Zhang, Jian; Ding, Ming-Shuai; Lv, Lan; Wang, Wen-Hong; Wu, Guang-Heng; Shan, Zhi-Wei; Li, Ju

    2017-06-14

    The intriguing phenomenon of metal superelasticity relies on stress-induced martensitic transformation (SIMT), which is well known to be governed by cooperative strain accommodation developing at multiple length scales. It is therefore scientifically interesting to see what happens when this natural length-scale hierarchy is disrupted. One method is producing pillars that confine the sample volume to the micrometer length scale. Here we apply yet another intervention, helium nanobubble injection, which produces porosity on the order of several nanometers. While pillar confinement suppresses superelasticity, we found that a dispersion of 5-10 nm helium nanobubbles does the opposite, promoting superelasticity in a Ni53.5Fe19.5Ga27 shape memory alloy. The role of helium nanobubbles in modulating the competition between ordinary dislocation slip plasticity and SIMT is discussed.

  4. Most people do not ignore salient invalid cues in memory-based decisions.

    PubMed

    Platzer, Christine; Bröder, Arndt

    2012-08-01

    Former experimental studies have shown that decisions from memory tend to rely only on a few cues, following simple noncompensatory heuristics like "take the best." However, it has also repeatedly been demonstrated that a pictorial, as opposed to a verbal, representation of cue information fosters the inclusion of more cues in compensatory strategies, suggesting a facilitated retrieval of cue patterns. These studies did not properly control for visual salience of cues, however. In the experiment reported here, the cue salience hierarchy established in a pilot study was either congruent or incongruent with the validity order of the cues. Only the latter condition increased compensatory decision making, suggesting that the apparent representational format effect is, rather, a salience effect: Participants automatically retrieve and incorporate salient cues irrespective of their validity. Results are discussed with respect to reaction time data.

  5. Array-based, parallel hierarchical mesh refinement algorithms for unstructured meshes

    DOE PAGES

    Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...

    2016-08-18

    In this paper, we describe an array-based hierarchical mesh refinement capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate nested hierarchies from an initial coarse mesh that can be used for a variety of purposes, such as multigrid solvers/preconditioners, solution convergence and verification studies, and improving overall parallel efficiency by decreasing I/O bandwidth requirements (by loading smaller meshes and refining in memory). We also describe a high-order boundary reconstruction capability that can be used to project the new points after refinement using high-order approximations instead of linear projection, in order to minimize, and provide more control over, the geometrical errors introduced by curved boundaries. The capability is developed under the parallel unstructured mesh framework "Mesh Oriented dAtaBase" (MOAB; Tautges et al., 2004). We describe the underlying data structures and algorithms to generate such hierarchies in parallel and present numerical results for computational efficiency and effect on mesh quality. Furthermore, we also present results to demonstrate the applicability of the developed capability to study convergence properties of different point projection schemes for various mesh hierarchies and to a multigrid finite-element solver for elliptic problems.

  6. CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms

    PubMed Central

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.

    2011-01-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404

  7. CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms.

    PubMed

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W

    2012-06-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  8. Caffeine Enhances Memory Performance in Young Adults during Their Non-optimal Time of Day

    PubMed Central

    Sherman, Stephanie M.; Buckley, Timothy P.; Baena, Elsa; Ryan, Lee

    2016-01-01

    Many college students struggle to perform well on exams in the early morning. Although students drink caffeinated beverages to feel more awake, it is unclear whether these actually improve performance. After consuming coffee (caffeinated or decaffeinated), college-age adults completed implicit and explicit memory tasks in the early morning and late afternoon (Experiment 1). During the morning, participants ingesting caffeine demonstrated a striking improvement in explicit memory, but not implicit memory. Caffeine did not alter memory performance in the afternoon. In Experiment 2, participants engaged in cardiovascular exercise in order to examine whether increases in physiological arousal similarly improved memory. Despite clear increases in physiological arousal, exercise did not improve memory performance compared to a stretching control condition. These results suggest that caffeine has a specific benefit for memory during students’ non-optimal time of day – early morning. These findings have real-world implications for students taking morning exams. PMID:27895607

  9. Parallelization of Program to Optimize Simulated Trajectories (POST3D)

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.; Korte, John J. (Technical Monitor)

    2001-01-01

    This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.

  10. Hierarchical parallel computer architecture defined by computational multidisciplinary mechanics

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug; Johnson, Keith

    1989-01-01

    The goal is to develop an architecture for parallel processors enabling optimal handling of multi-disciplinary computation of fluid-solid simulations employing finite element and difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.

  11. An Optimal Program Initiative Selection Model for USMC Program Objective Memorandum Planning

    DTIC Science & Technology

    1993-03-01

    Programming, Master’s Thesis, Naval Postgraduate School, Monterey, CA, September, 1992. 7. Anderson, S.M., Captain, USA, A Goal Programming R&D Project Funding ... Model of the U.S. Army Strategic Defense Command Using the Analytic Hierarchy Process, Master’s Thesis, Naval Postgraduate School, Monterey, CA

  12. Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan

    Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach, optimized for classically intractable eigenvalue problems, is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence-time requirements by leveraging classical computational resources. These algorithms are among the leading candidates to be the first to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. Finally, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states and reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.

  13. DReAM: Demand Response Architecture for Multi-level District Heating and Cooling Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Saptarshi; Chandan, Vikas; Arya, Vijay

    In this paper, we exploit the inherent hierarchy of heat exchangers in District Heating and Cooling (DHC) networks and propose DReAM, a novel Demand Response (DR) architecture for multi-level DHC networks. DReAM serves to economize system operation while still respecting the comfort requirements of individual consumers. Contrary to many present-day DR schemes that work at a consumer-level granularity, DReAM works at a level of the hierarchy above buildings, i.e., substations that supply heat to a group of buildings. This improves overall DR scalability and reduces the computational complexity. In the first step of the proposed approach, mathematical models of individual substations and their downstream networks are abstracted into appropriately constructed low-complexity structural forms. In the second step, this abstracted information is employed by the utility to perform DR optimization that determines the optimal heat inflow to individual substations rather than buildings, in order to achieve the targeted objectives across the network. We validate the proposed DReAM framework through experimental results under different scenarios on a test network.

  14. Development and demonstration of an on-board mission planner for helicopters

    NASA Technical Reports Server (NTRS)

    Deutsch, Owen L.; Desai, Mukund

    1988-01-01

    Mission management tasks can be distributed within a planning hierarchy, where each level of the hierarchy addresses a scope of action, an associated time scale or planning horizon, and requirements for plan-generation response time. The current work is focused on the far-field planning subproblem, with a scope and planning horizon encompassing the entire mission and with a required response time of about two minutes. The far-field planning problem is posed as a constrained optimization problem, and algorithms and structural organizations are proposed for the solution. Algorithms are implemented in a developmental environment, and performance is assessed with respect to optimality and feasibility for the intended application and in comparison with alternative algorithms. This is done for the three major components of far-field planning: goal planning, waypoint path planning, and timeline management. It appears feasible to meet performance requirements on a 10-MIPS flyable processor (dedicated to far-field planning) using a heuristically guided simulated annealing technique for the goal planner, a modified A* search for the waypoint path planner, and a speed scheduling technique developed for this project.

  15. Load balancing prediction method of cloud storage based on analytic hierarchy process and hybrid hierarchical genetic algorithm.

    PubMed

    Zhou, Xiuze; Lin, Fan; Yang, Lvqing; Nie, Jing; Tan, Qian; Zeng, Wenhua; Zhang, Nian

    2016-01-01

    With the continuous expansion of cloud computing platforms and the rapid growth of users and applications, how to use system resources efficiently to improve the overall performance of cloud computing has become a crucial issue. To address it, this paper proposes a method that uses analytic hierarchy process group decision (AHPGD) to evaluate the load state of server nodes. Training was carried out using a hybrid hierarchical genetic algorithm (HHGA) to optimize a radial basis function neural network (RBFNN). The AHPGD produces an aggregative load indicator for the virtual machines in the cloud, which serves as the input to the predictive RBFNN. This paper also proposes a new dynamic load balancing scheduling algorithm combined with weighted round-robin, which uses the periodical load values of nodes predicted by AHPGD and the HHGA-optimized RBFNN, then calculates the corresponding node weights and updates them continually, as sketched below. It thereby keeps the advantages, and avoids the shortcomings, of the static weighted round-robin algorithm.
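
    A sketch of the scheduling idea only: the AHPGD/RBFNN predictor is replaced by a stub, and the inverse-load weight rule is a simple illustrative choice, not the paper's exact formula.

        from itertools import cycle

        def predicted_loads():
            # Stand-in for the RBFNN prediction of each node's periodic load (0..1).
            return {"node-a": 0.2, "node-b": 0.5, "node-c": 0.8}

        def build_schedule(loads, slots=10):
            # Lighter predicted load -> larger weight -> more slots per round-robin cycle.
            inv = {n: 1.0 - l for n, l in loads.items()}
            total = sum(inv.values())
            counts = {n: max(1, round(slots * w / total)) for n, w in inv.items()}
            return cycle([n for n, c in counts.items() for _ in range(c)])

        schedule = build_schedule(predicted_loads())
        print([next(schedule) for _ in range(12)])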

  16. Relation between bandgap and resistance drift in amorphous phase change materials

    PubMed Central

    Rütten, Martin; Kaes, Matthias; Albert, Andreas; Wuttig, Matthias; Salinga, Martin

    2015-01-01

    Memory based on phase change materials is currently the most promising candidate for bridging the gap in access time between memory and storage in traditional memory hierarchy. However, multilevel storage is still hindered by the so-called resistance drift commonly related to structural relaxation of the amorphous phase. Here, we present the temporal evolution of infrared spectra measured on amorphous thin films of the three phase change materials Ag4In3Sb67Te26, GeTe and the most popular Ge2Sb2Te5. A widening of the bandgap upon annealing accompanied by a decrease of the optical dielectric constant ε∞ is observed for all three materials. Quantitative comparison with experimental data for the apparent activation energy of conduction reveals that the temporal evolution of bandgap and activation energy can be decoupled. The case of Ag4In3Sb67Te26, where the increase of activation energy is significantly smaller than the bandgap widening, demonstrates the possibility to identify new phase change materials with reduced resistance drift. PMID:26621533

  17. Relation between bandgap and resistance drift in amorphous phase change materials.

    PubMed

    Rütten, Martin; Kaes, Matthias; Albert, Andreas; Wuttig, Matthias; Salinga, Martin

    2015-12-01

    Memory based on phase change materials is currently the most promising candidate for bridging the gap in access time between memory and storage in traditional memory hierarchy. However, multilevel storage is still hindered by the so-called resistance drift commonly related to structural relaxation of the amorphous phase. Here, we present the temporal evolution of infrared spectra measured on amorphous thin films of the three phase change materials Ag4In3Sb67Te26, GeTe and the most popular Ge2Sb2Te5. A widening of the bandgap upon annealing accompanied by a decrease of the optical dielectric constant ε∞ is observed for all three materials. Quantitative comparison with experimental data for the apparent activation energy of conduction reveals that the temporal evolution of bandgap and activation energy can be decoupled. The case of Ag4In3Sb67Te26, where the increase of activation energy is significantly smaller than the bandgap widening, demonstrates the possibility to identify new phase change materials with reduced resistance drift.

  18. On Russian concepts of Soil Memory - expansion of Dokuchaev's pedological paradigm

    NASA Astrophysics Data System (ADS)

    Tsatskin, A.

    2012-04-01

    Having developed from Dokuchaev's research on chernozem soils on loess, the Russian school of pedology traditionally focused on soils as an essential component of the landscape. Dokuchaev's soil-landscape paradigm (SLP) was later considerably advanced by Hans Jenny and expanded to include surface soils on other continents. In the 1970s, Sokolov and Targulian in Russia introduced the term soil memory for the inherent ability of a soil to record, in its morphology and properties, the processes of earlier stages of its development. This understanding built upon ideas of soil organizational hierarchy and different rates of specific soil processes as proposed by Yaalon. Soil memory terminology became particularly popular in Russia, as expressed in the 2008 multi-author monograph on soil memory. The Soil Memory book, edited by Targulian and Goryachkin and written by 34 authors, touches upon the following themes: general approaches (Section 1), mineral carriers of soil memory (Section 2), biological carriers of soil memory (Section 3), and anthropogenic soil memory (Section 4). The book presents an original account of new interdisciplinary projects on Russian soils and represents an important contribution to the classical Dokuchaev-Jenny SL paradigm. There is still controversy as to how the Russian term soil memory relates to Western notions of soil as a record or archive of earlier events and processes during soil formation. Targulian and Goryachkin agree that the terms are close, albeit not entirely interchangeable. They hold that soil memory may have a more comprehensive meaning, e.g., applicable to complex cases in which soil properties of currently ambiguous origin cannot provide valid environmental reconstructions or cannot be dated by available techniques. In any case, terminology is not the main issue. The Russian soil memory concept advances the frontiers of pedology by deepening the time-related view of soil functions and encouraging closer cooperation with isotope-dating experts. This approach will hopefully help us all toward better understanding, management, and protection of the Earth's critical zone.

  19. Library API for Z-Order Memory Layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes

    This library provides a simple-to-use API for implementing an alternative to the traditional row-major in-memory layout, one based on a Morton-order space-filling curve (SFC), specifically a Z-order variant of the Morton-order curve. The library enables programmers, after a simple initialization step, to convert a multidimensional array from row-major to Z-order layout, then use a single, generic API call to access data at any arbitrary (i,j,k) location within the array, whether it is stored in row-major or Z-order format. The motivation for using an SFC in-memory layout is improved spatial locality, which results in increased use of local high-speed cache memory. The basic idea is that with row-major layouts, a data access to some location that is nearby in index space is likely far away in physical memory, resulting in poor spatial locality and slow runtime. With an SFC-based layout, on the other hand, accesses that are nearby in index space are much more likely to also be nearby in physical memory, resulting in much better spatial locality and better runtime performance. Numerous studies over the years have shown that significant runtime performance gains are realized by using an SFC-based memory layout compared to a row-major layout, sometimes by as much as 50%, resulting from the better use of the memory and cache hierarchy attendant with an SFC-based layout (see, for example, [Beth2012]). This library implementation is intended for use with codes that work with structured, array-based data in 2 or 3 dimensions. It is not appropriate for use with unstructured or point-based data.
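
    The core idea can be illustrated with a few lines of bit manipulation; below is a 2-D sketch of Morton (Z-order) index encoding. This is a generic illustration, not the library's actual API.

        def part1by1(n: int) -> int:
            """Spread the bits of a 16-bit integer so there is a zero between each bit."""
            n &= 0xFFFF
            n = (n | (n << 8)) & 0x00FF00FF
            n = (n | (n << 4)) & 0x0F0F0F0F
            n = (n | (n << 2)) & 0x33333333
            n = (n | (n << 1)) & 0x55555555
            return n

        def morton2d(i: int, j: int) -> int:
            """Z-order offset for a 2D array element (row i, column j)."""
            return (part1by1(i) << 1) | part1by1(j)

        # Elements (2,3) and (3,3) sit in adjacent rows; their Z-order offsets differ by only 2,
        # whereas a row-major layout would separate them by a full row stride.
        print(morton2d(2, 3), morton2d(3, 3))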

  20. Capping spheres with scarry crystals: Organizing principles of multi-dislocation, ground-state patterns

    NASA Astrophysics Data System (ADS)

    Azadi, Amir; Grason, Gregory M.

    2014-03-01

    Predicting the ground-state ordering of curved crystals remains an unsolved, century-old challenge, extending from the classic Thomson problem to more recent studies of particle-coated droplets. We study the structural features and underlying principles of multi-dislocation ground states of a crystalline cap adhered to a spherical substrate. In the continuum limit of vanishing lattice spacing, a → 0, dislocations proliferate, and we show that ground states approach a characteristic sequence of patterns of n-fold radial grain-boundary "scars," extending from the boundary and terminating in the bulk. A combination of numerical and asymptotic analysis reveals that the energetic hierarchy gives rise to a structural hierarchy, whereby the number of dislocations and scars diverges as a → 0 while the scar length and the number of dislocations per scar become remarkably independent of lattice spacing. We show that the structural hierarchy remains intact when n-fold symmetry becomes unstable to polydispersed forked-scar morphologies. We expect this analysis to resolve previously open questions about the optimal symmetries of dislocation patterns in Thomson-like problems, both with and without excess 5-fold defects.

  1. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce control consumption and protect the actuator while satisfying the system performance requirements. First, we introduce a dynamic model of the offshore platform with low-order main modes based on a mode reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since this non-convex model is difficult to solve directly with an optimization algorithm, we use a relaxation method with matrix operations to transform it into a convex optimization model, which can then be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  2. SU (2) lattice gauge theory simulations on Fermi GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Nuno, E-mail: nunocardoso@cftp.ist.utl.p; Bicudo, Pedro, E-mail: bicudo@ist.utl.p

    2011-05-10

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. To obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T, and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved an excellent speedup of 200x over one CPU in single precision, around 110 Gflops/s. We also find that, on the Fermi architecture, double-precision computations for the static quark-antiquark potential are not much slower (less than 2x) than single-precision computations.

  3. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red-eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for the observer, are important tasks. A novel, efficient technique for automatic red-eye correction aimed at photo printers is proposed. The algorithm is independent of face orientation and capable of detecting paired red eyes as well as single red eyes. The approach is based on 3D tables of typicalness levels for red eyes and human skin tones, and on directional edge-detection filters for processing the redness image. Machine learning is applied for feature selection. For classification of red-eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementation variants are possible, trading off detection and correction quality, processing time, and memory volume. A numeric quality criterion for automatic red-eye correction is also proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.

  4. A parallel finite element procedure for contact-impact problems using edge-based smooth triangular element and GPU

    NASA Astrophysics Data System (ADS)

    Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang

    2018-04-01

    The edge-smooth finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems with a graphical processing unit (GPU) using a special edge-smooth triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve these ES-FEM based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed, using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.

  5. Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception

    PubMed Central

    Rohe, Tim; Noppeney, Uta

    2015-01-01

    To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world. PMID:25710328
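
    For reference, the commonly used model-averaging formulation of Bayesian Causal Inference is given below in its generic form (the exact observer model fitted in the study may differ): the posterior probability of a common cause weights the fused and segregated location estimates.

        P(C=1 \mid x_A, x_V) =
          \frac{P(x_A, x_V \mid C=1)\,P(C=1)}
               {P(x_A, x_V \mid C=1)\,P(C=1) + P(x_A, x_V \mid C=2)\,\bigl(1 - P(C=1)\bigr)}

        \hat{S}_A = P(C=1 \mid x_A, x_V)\,\hat{S}_{\mathrm{fused}}
                  + \bigl(1 - P(C=1 \mid x_A, x_V)\bigr)\,\hat{S}_{A,\mathrm{seg}}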

  6. The Role of Semantic Clustering in Optimal Memory Foraging

    ERIC Educational Resources Information Center

    Montez, Priscilla; Thompson, Graham; Kello, Christopher T.

    2015-01-01

    Recent studies of semantic memory have investigated two theories of optimal search adopted from the animal foraging literature: Lévy flights and marginal value theorem. Each theory makes different simplifying assumptions and addresses different findings in search behaviors. In this study, an experiment is conducted to test whether clustering in…

  7. Pharmacological and therapeutic directions in ADHD: Specificity in the PFC.

    PubMed

    Levy, Florence

    2008-02-28

    Recent directions in the treatment of ADHD have involved a broadening of pharmacological perspectives to include noradrenergic as well as dopaminergic agents. A review of animal and human studies of pharmacological and therapeutic directions in ADHD suggests that the D1 receptor is a specific site for dopaminergic regulation of the prefrontal cortex (PFC), but optimal levels of dopamine (DA) are required for beneficial effects on working memory. Animal and human studies indicate that the alpha-2A receptor is also important for prefrontal regulation, leaving open the question of the relative importance of these receptor sites. The therapeutic effects of ADHD medications in the prefrontal cortex have focused attention on the development of working memory capacity in ADHD. The dopaminergic and noradrenergic agents currently available for the treatment of ADHD have overlapping but different actions in the PFC and subcortical centers. While stimulants act on D1 receptors in the dorsolateral prefrontal cortex, they also have effects on D2 receptors in the corpus striatum and may also have serotonergic effects in orbitofrontal areas. At therapeutic levels, DA stimulation (through DAT transporter inhibition) decreases noise by acting on subcortical D2 receptors, while NE stimulation (through alpha-2A agonists) increases signal by acting preferentially in the PFC, possibly on DA D1 receptors. Alpha-2A noradrenergic transmission, on the other hand, is more limited to the PFC, and thus less likely to have motor or stereotypic side effects, while alpha-2B and alpha-2C agonists may have wider cortical effects. The data suggest a possible hierarchy of specificity in the current medications used in the treatment of ADHD, with guanfacine likely to be most specific for the treatment of prefrontal attentional and working memory deficits. Stimulants may have broader effects on both vigilance and motor impulsivity, depending on dose levels, while atomoxetine may have effects on attention, anxiety, social affect, and sedation via noradrenergic transmission. At a theoretical level, the advent of possibly specific alpha-2A noradrenergic therapies has posed the question of the role of working memory in ADHD. Head-to-head comparisons of stimulant and noradrenergic alpha-2A, alpha-2B, and alpha-2C agonists, utilizing vigilance and affective measures, should help to clarify pharmacological and therapeutic differences.

  8. The effects of experimental pain and induced optimism on working memory task performance.

    PubMed

    Boselie, Jantine J L M; Vancleef, Linda M G; Peters, Madelon L

    2016-07-01

    Pain can interrupt and deteriorate executive task performance. We have previously shown that experimentally induced optimism can diminish the deteriorating effect of cold pressor pain on a subsequent working memory task (i.e., operation span task). In two successive experiments we sought further evidence for the protective role of optimism against pain-induced working memory impairments. We used another working memory task (i.e., 2-back task) that was performed either after or during pain induction. Study 1 employed a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain)×2 (pre-score vs. post-score) mixed factorial design. In half of the participants optimism was induced by the Best Possible Self (BPS) manipulation, which required them to write about and visualize a future life in which everything has turned out for the best. In the control condition, participants wrote about and visualized a typical day (TD) in their life. Next, participants completed either the cold pressor task (CPT) or a warm water control task (WWCT). Before (baseline) and after the CPT or WWCT, participants' working memory performance was measured with the 2-back task. The 2-back task measures the ability to monitor and update working memory representations by asking participants to indicate whether the current stimulus corresponds to the stimulus that was presented 2 stimuli ago. Study 2 had a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain) mixed factorial design. After receiving the BPS or control manipulation, participants completed the 2-back task twice: once with painful heat stimulation, and once without any stimulation (counter-balanced order). Continuous heat stimulation was used, with temperatures oscillating between 1°C above and 1°C below the individual pain threshold. In Study 1, the results did not show an effect of cold pressor pain on subsequent 2-back task performance. Results of Study 2 indicated that heat pain impaired concurrent 2-back task performance. However, no evidence was found that optimism protected against this pain-induced performance deterioration. Experimentally induced pain thus impairs concurrent but not subsequent working memory task performance, and manipulated optimism did not counteract pain-induced deterioration of 2-back performance. It is important to explore factors that may diminish the negative impact of pain on the ability to function in daily life, as pain itself often cannot be remediated. We are planning to conduct future studies that should shed further light on the conditions, contexts, and executive operations for which optimism can act as a protective factor. Copyright © 2016 Scandinavian Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  9. Context and meter enhance long-range planning in music performance

    PubMed Central

    Mathias, Brian; Pfordresher, Peter Q.; Palmer, Caroline

    2015-01-01

    Neural responses demonstrate evidence of resonance, or oscillation, during the production of periodic auditory events. Music contains periodic auditory events that give rise to a sense of beat, which in turn generates a sense of meter on the basis of multiple periodicities. Metrical hierarchies may aid memory for music by facilitating similarity-based associations among sequence events at different periodic distances that unfold in longer contexts. A fundamental question is how metrical associations arising from a musical context influence memory during music performance. Longer contexts may facilitate metrical associations at higher hierarchical levels more than shorter contexts, a prediction of the range model, a formal model of planning processes in music performance (Palmer and Pfordresher, 2003; Pfordresher et al., 2007). Serial ordering errors, in which intended sequence events are produced in incorrect sequence positions, were measured as skilled pianists performed musical pieces that contained excerpts embedded in long or short musical contexts. Pitch errors arose from metrically similar positions and further sequential distances more often when the excerpt was embedded in long contexts compared to short contexts. Musicians’ keystroke intensities and error rates also revealed influences of metrical hierarchies, which differed for performances in long and short contexts. The range model accounted for contextual effects and provided better fits to empirical findings when metrical associations between sequence events were included. Longer sequence contexts may facilitate planning during sequence production by increasing conceptual similarity between hierarchically associated events. These findings are consistent with the notion that neural oscillations at multiple periodicities may strengthen metrical associations across sequence events during planning. PMID:25628550

  10. Amnesia and the organization of the hippocampal system.

    PubMed

    Mishkin, M; Vargha-Khadem, F; Gadian, D G

    1998-01-01

    Early hippocampal injury in humans has been found to result in a limited form of global anterograde amnesia. At issue is whether the limitation is qualitative, with the amnesia reflecting substantially greater impairment in episodic than in semantic memory, or only quantitative, with both episodic and semantic memory being partially and equivalently impaired. Evidence from neuroanatomical and lesion studies in animals suggests that the hippocampus and subhippocampal cortices form a hierarchically organized system, such that the greatest convergence of information (and, by implication, the richest amount of association) takes place within the hippocampus, located at the top of the hierarchy. On the one hand, this evidence is consistent with the view that selective hippocampal damage produces a differential impairment in context-rich episodic memory as compared with context-free semantic memory, because only the latter can be supported by the subhippocampal cortices. On the other hand, given the system's hierarchical form of organization, this dissociation of deficits is difficult to prove, because a quantitatively limited deficit will nearly always be a viable alternative. A final choice between the alternative views is therefore likely to depend less on further evidence gathered in brain-injured patients than on which view accounts for more of the data gathered from converging approaches to the problem.

  11. The storage system of PCM based on random access file system

    NASA Astrophysics Data System (ADS)

    Han, Wenbing; Chen, Xiaogang; Zhou, Mi; Li, Shunfen; Li, Gezi; Song, Zhitang

    2016-10-01

    Emerging memory technologies such as phase change memory (PCM) offer fast, random access to persistent storage with better scalability. Establishing PCM in the storage hierarchy to narrow the performance gap is a hot topic of academic and industrial research. However, existing file systems do not perform well with emerging PCM storage, since they access the storage medium via a slow, block-based interface. In this paper, we propose a novel file system, RAFS, built on an embedded platform, to bring out the performance of PCM. We attach PCM chips to the memory bus and build RAFS on the physical address space. In the proposed file system, we simplify the traditional system architecture to eliminate block-related operations and layers. Furthermore, we adopt memory mapping and bypass the page cache to reduce copy overhead between the process address space and the storage device. XIP mechanisms are also supported in RAFS. To the best of our knowledge, we are among the first to implement a file system on real PCM chips. We have analyzed and evaluated its performance with the IOZONE benchmark tools. Our experimental results show that RAFS on PCM outperforms Ext4fs on SDRAM for small record lengths. On DRAM, RAFS is significantly faster than Ext4fs, by 18% to 250%.
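
    The memory-mapping idea that RAFS relies on can be illustrated with ordinary POSIX mmap; the sketch below is generic Python, not the RAFS interface, and the file path merely stands in for a PCM-backed region.

        import mmap, os

        path = "/tmp/pcm_region.bin"          # stand-in for a byte-addressable PCM region
        with open(path, "wb") as f:
            f.truncate(4096)                  # reserve one page

        fd = os.open(path, os.O_RDWR)
        with mmap.mmap(fd, 4096) as region:
            region[0:5] = b"hello"            # store directly through the mapping,
            print(bytes(region[0:5]))         # without read()/write() copies through buffers
        os.close(fd)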

  12. Control of Finite-State, Finite Memory Stochastic Systems

    NASA Technical Reports Server (NTRS)

    Sandell, Nils R.

    1974-01-01

    A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite state, finite memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem are investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control theoretic techniques to information processing problems.

  13. Optimization of memory use of fragment extension-based protein-ligand docking with an original fast minimum cost flow algorithm.

    PubMed

    Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka

    2018-06-01

    The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that entails memorizing the evaluation result of the partial structure of a compound and reusing it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. Thus, more efficient memory usage can be expected to lead to further acceleration, and optimal memory usage could be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem utilizing the characteristics of the graph generated for this problem as constraints. The proposed algorithm, which optimized memory usage, was approximately seven times faster compared to existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
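
    A tiny illustration of the underlying problem class, solved here with a generic solver (networkx) rather than the specialized algorithm proposed in the paper; the node names, demands, capacities, and costs are invented for the example.

        import networkx as nx

        # Negative demand = supply; edges carry a capacity and a cost ("weight").
        G = nx.DiGraph()
        G.add_node("fragments", demand=-4)   # units of cached partial evaluations to place
        G.add_node("evicted", demand=4)
        G.add_edge("fragments", "memory", capacity=3, weight=1)   # cheap: keep in memory
        G.add_edge("fragments", "disk", capacity=4, weight=5)     # expensive: spill to disk
        G.add_edge("memory", "evicted", capacity=3, weight=0)
        G.add_edge("disk", "evicted", capacity=4, weight=0)

        flow = nx.min_cost_flow(G)
        print(flow)                          # routes 3 units through memory, 1 through disk
        print(nx.cost_of_flow(G, flow))      # -> 8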

  14. Composite Particle Swarm Optimizer With Historical Memory for Function Optimization.

    PubMed

    Li, Jie; Zhang, JunQi; Jiang, ChangJun; Zhou, MengChu

    2015-10-01

    The particle swarm optimization (PSO) algorithm is a population-based stochastic optimization technique. It is characterized by a collaborative search in which each particle is attracted toward the global best position (gbest) in the swarm and its own best position (pbest). However, in PSO all of the particles' historically promising pbests are lost except their current pbests. To solve this problem, this paper proposes a novel composite PSO algorithm, called historical memory-based PSO (HMPSO), which uses an estimation-of-distribution algorithm to estimate and preserve the distribution information of the particles' historically promising pbests. Each particle has three candidate positions, generated from the historical memory, the particle's current pbest, and the swarm's gbest; the best candidate position is then adopted. Experiments on 28 CEC2013 benchmark functions demonstrate the superiority of HMPSO over other algorithms.
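
    A heavily simplified 1-D sketch of the historical-memory idea follows: a Gaussian fit to the archived pbests stands in for the estimation-of-distribution step, and each particle evaluates three candidate positions. This is not the HMPSO implementation evaluated in the paper, and the objective function is illustrative.

        import random, statistics

        def f(x):                              # objective to minimize (illustrative)
            return (x - 3.0) ** 2

        def hmpso(n_particles=20, iters=200, lo=-10.0, hi=10.0):
            xs = [random.uniform(lo, hi) for _ in range(n_particles)]
            pbests = list(xs)
            gbest = min(xs, key=f)
            archive = list(pbests)                         # historical promising pbests
            for _ in range(iters):
                mu = statistics.fmean(archive)             # crude "EDA": fit a Gaussian
                sigma = statistics.pstdev(archive) or 1e-6
                for i, x in enumerate(xs):
                    candidates = [
                        random.gauss(mu, sigma),                      # from historical memory
                        x + random.uniform(0, 1) * (pbests[i] - x),   # toward own pbest
                        x + random.uniform(0, 1) * (gbest - x),       # toward swarm gbest
                    ]
                    xs[i] = min(candidates, key=f)         # adopt the best candidate
                    if f(xs[i]) < f(pbests[i]):
                        pbests[i] = xs[i]
                        archive.append(xs[i])
                gbest = min(pbests, key=f)
            return gbest

        print(hmpso())   # converges near x = 3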

  15. PODIO: An Event-Data-Model Toolkit for High Energy Physics Experiments

    NASA Astrophysics Data System (ADS)

    Gaede, F.; Hegner, B.; Mato, P.

    2017-10-01

    PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM Toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from LHC and the linear collider community shows that existing solutions partly suffer from overly complex data models with deep object-hierarchies or unfavorable I/O performance. The PODIO project was created in order to address these problems. PODIO is based on the idea of employing plain-old-data (POD) data structures wherever possible, while avoiding deep object-hierarchies and virtual inheritance. At the same time it provides the necessary high-level interface towards the developer physicist, such as the support for inter-object relations and automatic memory-management, as well as a Python interface. To simplify the creation of efficient data models PODIO employs code generation from a simple yaml-based markup language. In addition, it was developed with concurrency in mind in order to support the use of modern CPU features, for example giving basic support for vectorization techniques.

  16. Extraction and prediction of indices for monsoon intraseasonal oscillations: an approach based on nonlinear Laplacian spectral analysis

    NASA Astrophysics Data System (ADS)

    Sabeerali, C. T.; Ajayamohan, R. S.; Giannakis, Dimitrios; Majda, Andrew J.

    2017-11-01

    An improved index for real-time monitoring and forecast verification of monsoon intraseasonal oscillations (MISOs) is introduced using the recently developed nonlinear Laplacian spectral analysis (NLSA) technique. Using NLSA, a hierarchy of Laplace-Beltrami (LB) eigenfunctions is extracted from unfiltered daily rainfall data from the Global Precipitation Climatology Project over the south Asian monsoon region. Two modes representing the full life cycle of the northeastward-propagating boreal summer MISO are identified from the hierarchy of LB eigenfunctions. These modes have a number of advantages over MISO modes extracted via extended empirical orthogonal function analysis, including higher memory and predictability, stronger amplitude and higher fractional explained variance over the western Pacific, Western Ghats, and adjoining Arabian Sea regions, and more realistic representation of the regional heat sources over the Indian and Pacific Oceans. Real-time prediction of NLSA-derived MISO indices is demonstrated via extended-range hindcasts based on NCEP Coupled Forecast System version 2 operational output. It is shown that in these hindcasts the NLSA MISO indices remain predictable out to ~3 weeks.

  17. Optimizing inhomogeneous spin ensembles for quantum memory

    NASA Astrophysics Data System (ADS)

    Bensky, Guy; Petrosyan, David; Majer, Johannes; Schmiedmayer, Jörg; Kurizki, Gershon

    2012-07-01

    We propose a method to maximize the fidelity of quantum memory implemented by a spectrally inhomogeneous spin ensemble. The method is based on preselecting the optimal spectral portion of the ensemble by judiciously designed pulses. This leads to significant improvement of the transfer and storage of quantum information encoded in the microwave or optical field.

  18. Memory and Study Strategies for Optimal Learning.

    ERIC Educational Resources Information Center

    Hamachek, Alice L.

    Study strategies are those specific reading skills that increase understanding, memory storage, and retrieval. Memory techniques are crucial to effective studying, and to subsequent performance in class and on written examinations. A major function of memory is to process information. Stimuli are picked up by sensory receptors and transferred to…

  19. The influence of multispectral scanner spatial resolution on forest feature classification

    NASA Technical Reports Server (NTRS)

    Sadowski, F. G.; Malila, W. A.; Sarno, J. E.; Nalepka, R. F.

    1977-01-01

    Inappropriate spatial resolution and corresponding data processing techniques may be major causes for non-optimal forest classification results frequently achieved from multispectral scanner (MSS) data. Procedures and results of empirical investigations are studied to determine the influence of MSS spatial resolution on the classification of forest features into levels of detail or hierarchies of information that might be appropriate for nationwide forest surveys and detailed in-place inventories. Two somewhat different, but related studies are presented. The first consisted of establishing classification accuracies for several hierarchies of features as spatial resolution was progressively coarsened from (2 meters)² to (64 meters)². The second investigated the capabilities for specialized processing techniques to improve upon the results of conventional processing procedures for both coarse and fine resolution data.

  20. A linguistic geometry for 3D strategic planning

    NASA Technical Reports Server (NTRS)

    Stilman, Boris

    1995-01-01

    This paper is a new step in the development and application of the Linguistic Geometry. This formal theory is intended to discover the inner properties of human expert heuristics, which have been successful in a certain class of complex control systems, and apply them to different systems. In this paper we investigate heuristics extracted in the form of hierarchical networks of planning paths of autonomous agents. Employing Linguistic Geometry tools the dynamic hierarchy of networks is represented as a hierarchy of formal attribute languages. The main ideas of this methodology are shown in this paper on the new pilot example of the solution of the extremely complex 3D optimization problem of strategic planning for the space combat of autonomous vehicles. This example demonstrates deep and highly selective search in comparison with conventional search algorithms.

  1. Structural optimization via a design space hierarchy

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which design is treated as a hierarchy of subproblems, one being locally constrained and the other being locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction which will decrease the value of the objective function while maintaining a feasible design. The case of one-dimensional search in a two-variable design space is discussed. Possible applications are discussed. A major feature of the proposed algorithm is its application to problems which are inherently ill-conditioned, such as design of structures for optimum geometry.

  2. Performance Prediction Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chennupati, Gopinath; Santhi, Nanadakishore; Eidenbenz, Stephen

    The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their tasks through message exchanges to remain active, sleep, wake up, begin, and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned via regression models fitted to hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes as input a tasklist. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with calls to the hardware model's time_compute() function, giving tasklists as input that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU core level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, whereas our previous alternatives explicitly included the L1, L2, and L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably reflect only the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; moreover, hard-coded hit rates make the hardware model insensitive to changes in cache sizes. Instead, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code, which realistically can be done at best for small examples.
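
    The sketch below gives a minimal, hypothetical flavor of such a time_compute()-style model: it converts a tasklist of ALU operations and memory accesses into a time estimate by mapping a reuse-distance distribution onto per-level cache hit fractions. The cache parameters, latencies, and tasklist layout are invented for illustration and do not reproduce the actual PPT/AMM interface.

      # Hypothetical time_compute()-style hardware model: translate a tasklist of
      # ALU ops and memory accesses into seconds, resolving accesses to cache
      # levels from a cumulative reuse-distance distribution.
      def time_compute(tasklist, clock_hz=2.5e9):
          cache_levels = [        # (capacity in cache lines, access latency in cycles)
              (512, 4),           # L1
              (4096, 12),         # L2
              (65536, 40),        # L3
          ]
          dram_latency = 200
          cycles = 0.0
          for task in tasklist:
              cycles += task["alu_ops"] * task.get("cycles_per_op", 1.0)
              remaining = 1.0     # fraction of accesses not yet resolved to a level
              for capacity, latency in cache_levels:
                  # Fraction of accesses whose reuse distance fits in this level,
                  # minus what was already resolved at smaller levels.
                  hit = sum(p for d, p in task["reuse_dist"] if d <= capacity) - (1.0 - remaining)
                  hit = max(0.0, min(hit, remaining))
                  cycles += task["mem_accesses"] * hit * latency
                  remaining -= hit
              cycles += task["mem_accesses"] * remaining * dram_latency
          return cycles / clock_hz

      # Example: 1e6 ALU ops, 2e5 loads, 80% with reuse distance <= 512 lines.
      tasks = [{"alu_ops": 1_000_000, "mem_accesses": 200_000,
                "reuse_dist": [(512, 0.8), (65536, 0.15), (10**9, 0.05)]}]
      print(time_compute(tasks))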

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjusic, Tommy; Kartsaklis, Christos

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for high-performance systems are typically developed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful reconsideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).

  4. About sleep's role in memory.

    PubMed

    Rasch, Björn; Born, Jan

    2013-04-01

    Over more than a century of research has established the fact that sleep benefits the retention of memory. In this review we aim to comprehensively cover the field of "sleep and memory" research by providing a historical perspective on concepts and a discussion of more recent key findings. Whereas initial theories posed a passive role for sleep enhancing memories by protecting them from interfering stimuli, current theories highlight an active role for sleep in which memories undergo a process of system consolidation during sleep. Whereas older research concentrated on the role of rapid-eye-movement (REM) sleep, recent work has revealed the importance of slow-wave sleep (SWS) for memory consolidation and also enlightened some of the underlying electrophysiological, neurochemical, and genetic mechanisms, as well as developmental aspects in these processes. Specifically, newer findings characterize sleep as a brain state optimizing memory consolidation, in opposition to the waking brain being optimized for encoding of memories. Consolidation originates from reactivation of recently encoded neuronal memory representations, which occur during SWS and transform respective representations for integration into long-term memory. Ensuing REM sleep may stabilize transformed memories. While elaborated with respect to hippocampus-dependent memories, the concept of an active redistribution of memory representations from networks serving as temporary store into long-term stores might hold also for non-hippocampus-dependent memory, and even for nonneuronal, i.e., immunological memories, giving rise to the idea that the offline consolidation of memory during sleep represents a principle of long-term memory formation established in quite different physiological systems.

  5. CHAMPION: Intelligent Hierarchical Reasoning Agents for Enhanced Decision Support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, Ryan E.; Greitzer, Frank L.; Noonan, Christine F.

    2011-11-15

    We describe the design and development of an advanced reasoning framework employing semantic technologies, organized within a hierarchy of computational reasoning agents that interpret domain specific information. Designed based on an inspirational metaphor of the pattern recognition functions performed by the human neocortex, the CHAMPION reasoning framework represents a new computational modeling approach that derives invariant knowledge representations through memory-prediction belief propagation processes that are driven by formal ontological language specification and semantic technologies. The CHAMPION framework shows promise for enhancing complex decision making in diverse problem domains including cyber security, nonproliferation and energy consumption analysis.

  6. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    DOE PAGES

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...

    2016-05-20

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software and leads to smaller memory requirements and, in some cases, shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
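
    A stripped-down illustration of reverse accumulation for a fixed point is sketched below under strong simplifying assumptions: the forward problem is a linear contraction z = A z + b rather than a nonlinear stress balance, and the adjoint of a scalar output is obtained by iterating the transposed map instead of differentiating through every forward iteration.

      import numpy as np

      rng = np.random.default_rng(0)
      A = 0.4 * rng.random((4, 4)) / 4     # small entries, so iteration converges
      b = rng.random(4)
      g = rng.random(4)                    # scalar output J(z) = g . z

      def forward(z0, tol=1e-12):
          z = z0
          while True:
              z_new = A @ z + b
              if np.linalg.norm(z_new - z) < tol:
                  return z_new
              z = z_new

      def adjoint(dJ_dz, tol=1e-12):
          # Reverse-accumulated fixed point: iterate the transposed map rather
          # than storing and differentiating every forward iterate.
          w = np.zeros_like(dJ_dz)
          while True:
              w_new = A.T @ w + dJ_dz
              if np.linalg.norm(w_new - w) < tol:
                  return w_new
              w = w_new

      z_star = forward(np.zeros(4))
      w_star = adjoint(g)                  # equals dJ/db for this linear problem
      print(np.allclose(w_star, np.linalg.solve(np.eye(4) - A.T, g)))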

  7. Numerical difficulties associated with using equality constraints to achieve multi-level decomposition in structural optimization

    NASA Technical Reports Server (NTRS)

    Thareja, R.; Haftka, R. T.

    1986-01-01

    There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.

  8. Novices and Experts in Geoinformatics: the Cognitive Gap.

    NASA Astrophysics Data System (ADS)

    Zhilin, M.

    2012-04-01

    Modern geoinformatics is an extremely powerful tool for problem analysis and decision making in various fields. Currently the general public uses geoinformatics predominantly for navigation (GPS) and for sharing information about particular places (GoogleMaps, Wikimapia). Communities also use geoinformatics for particular purposes: fans of history use it to relate historical and present-day maps (www.retromap.ru), birdwatchers mark places where they observed birds (geobirds.com/rangemaps), etc. However, the majority of stakeholders and local authorities are not aware of the advantages and possibilities of geoinformatics. The same problem is observed for students. At the same time, many professional geoinformatic tools have been developed, but sometimes even the experts cannot explain their purpose to non-experts. So the question is how to shrink the gap between experts and non-experts in understanding and applying geoinformatics. We think this gap has a cognitive basis. According to modern cognitive theories (Shiffrin-Atkinson and its descendants), information first has to pass through a perceptual filter that cuts off information that seems to be irrelevant. The mind estimates relevance implicitly (unconsciously), basing the judgment on previous knowledge and on what it considers important. The information then enters working memory, which is used (a) for processing and (b) for problem solving. Working memory has limited capacity and can operate with only about 7 objects simultaneously. Information then passes to long-term memory, which is of unlimited capacity; there it is stored as more or less complex structures with associative links and is extracted into working memory when necessary. If a great amount of information is linked ("chunked"), working memory operates with it as one object of the seven, thus overcoming the limitation of working memory capacity. To adopt any information, it should (a) pass through the perceptual filter, (b) not overload working memory, and (c) be structured in long-term memory. Experts easily adopt domain-specific information because they (a) understand the terminology and consider the information important, thus passing it through the perceptual filter, and (b) have many complex domain-specific chunks that are processed by working memory as a whole, thus avoiding overload. Novices (students and the general public) have neither the understanding and sense of importance nor the necessary chunks. The following measures should be taken to bridge experts' and novices' understanding of geoinformatics. The expert community should popularize geoscientific problems, developing an understandable language and accessible tools for solving them. This requires close collaboration with the educational system (especially second education). If students understand a problem, they can find and apply an appropriate tool for it. Geoscientific problems and models are extremely complex; in cognitive terms, they require a hierarchy of chunks. This hierarchy should develop coherently, beginning with simple chunks and later joining them into complex ones, which requires an appropriate sequence of learning tasks. There is no need for fully correct solutions; the students should understand how the problems are solved and recognize the limitations of the models. We think that tasks such as weather forecasting and global climate modeling are suitable. The first step in bridging experts and novices is therefore the elaboration of a set and sequence of learning tasks, together with tools for their solution. The tools should be easy to use for everybody who understands the task and as versatile as possible; otherwise students will waste a lot of time mastering them. This development requires close collaboration between geoscientists and educators.

  9. Optimized Read/Write Conditions of PHB Memory,

    DTIC Science & Technology

    PHB memory has been a good candidate for a future ultra-high density memory for the past ten years. This PHB memory is considered to realize the...diameter recording spot. But not so many researchers are working on PHB memory compared to the number of researchers wrestling with the realization of higher...possible in such high density recording in a 1-micron diameter spot. Therefore one of the most important research topics on PHB memory is the estimation of

  10. Comparison of multiobjective evolutionary algorithms: empirical results.

    PubMed

    Zitzler, E; Deb, K; Thiele, L

    2000-01-01

    In this paper, we provide a systematic comparison of various evolutionary approaches to multiobjective optimization using six carefully chosen test functions. Each test function involves a particular feature that is known to cause difficulty in the evolutionary optimization process, mainly in converging to the Pareto-optimal front (e.g., multimodality and deception). By investigating these different problem features separately, it is possible to predict the kind of problems to which a certain technique is or is not well suited. However, in contrast to what was suspected beforehand, the experimental results indicate a hierarchy of the algorithms under consideration. Furthermore, the emerging effects are evidence that the suggested test functions provide sufficient complexity to compare multiobjective optimizers. Finally, elitism is shown to be an important factor for improving evolutionary multiobjective search.
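
    For concreteness, one representative two-objective test function in the style used by this study (the suite is now commonly referred to as the ZDT functions) is sketched below; the Pareto-optimal front is reached only when the g term is driven down to its minimum of 1.

      import numpy as np

      def zdt1(x):
          # Two-objective test problem over x in [0, 1]^n; convex Pareto front.
          f1 = x[0]
          g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
          f2 = g * (1.0 - np.sqrt(f1 / g))
          return f1, f2

      x = np.random.default_rng(0).random(30)   # 30 decision variables in [0, 1]
      print(zdt1(x))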

  11. Research on Collection System Optimal Design of Wind Farm with Obstacles

    NASA Astrophysics Data System (ADS)

    Huang, W.; Yan, B. Y.; Tan, R. S.; Liu, L. F.

    2017-05-01

    For the optimal design of an offshore wind farm collection system, the factors to consider include not only the reasonable configuration of cables and switches but also the influence of obstacles on the topology design of the offshore wind farm. This paper presents a concrete topology optimization algorithm that accounts for obstacles. The minimal-area rectangular encasing box of each obstacle is obtained using the minimal-area encasing box method. An optimization algorithm combining the advantages of Dijkstra's algorithm and Prim's algorithm is then used to obtain an obstacle-avoiding path-planning scheme. Finally, a fuzzy comprehensive evaluation model based on the analytic hierarchy process is constructed to compare the performance of the different topologies. Case studies demonstrate the feasibility of the proposed algorithm and model.
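
    A rough sketch of the cable-routing idea appears below: a minimum spanning tree over turbine positions is built with Prim's algorithm, and edges passing through a rectangular obstacle (detected here by crudely sampling points along the edge) are penalized. The coordinates, penalty factor, and obstacle test are all illustrative; the paper combines Dijkstra's and Prim's algorithms with a proper minimal-area encasing box rather than a simple penalty.

      import math

      def crosses_obstacle(p, q, box, samples=20):
          # Crude test: sample points along the segment and check containment.
          xmin, ymin, xmax, ymax = box
          for i in range(samples + 1):
              t = i / samples
              x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
              if xmin <= x <= xmax and ymin <= y <= ymax:
                  return True
          return False

      def cable_cost(p, q, box, penalty=3.0):
          d = math.dist(p, q)
          return d * penalty if crosses_obstacle(p, q, box) else d

      def prim_mst(nodes, box):
          # Grow the tree from node 0, always adding the cheapest boundary edge.
          in_tree, edges = {0}, []
          while len(in_tree) < len(nodes):
              best = min(((i, j, cable_cost(nodes[i], nodes[j], box))
                          for i in in_tree
                          for j in range(len(nodes)) if j not in in_tree),
                         key=lambda e: e[2])
              in_tree.add(best[1])
              edges.append(best)
          return edges

      turbines = [(0, 0), (2, 0), (2, 2), (0, 2), (4, 1)]   # toy layout
      print(prim_mst(turbines, box=(0.8, -0.5, 1.2, 2.5)))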

  12. Operational Analysis of Time-Optimal Maneuvering for Imaging Spacecraft

    DTIC Science & Technology

    2013-03-01

    imaging spacecraft. The analysis is facilitated through the use of AGI's Systems Tool Kit (STK) software. An Analytic Hierarchy Process (AHP)-based...the Singapore-developed X-SAT imaging spacecraft.

  13. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although highly dynamical, little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication is comprised of six basic (i.e., psychologically irreducible) categories, and instead suggesting four. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    PubMed

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
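
    A simplified sketch of the offline page-detection idea is given below: fixed-size pages of each VM's code segment are hashed, and byte-identical pages are grouped so they could be backed by a single shared frame. The page size, data layout, and hashing choice are illustrative, not SMD's actual implementation.

      import hashlib
      from collections import defaultdict

      PAGE_SIZE = 4096

      def dedup_code_pages(vm_code_segments):
          """vm_code_segments: dict mapping VM name -> bytes of its code segment."""
          shared = defaultdict(list)          # content hash -> [(vm, page index)]
          for vm, code in vm_code_segments.items():
              for i in range(0, len(code), PAGE_SIZE):
                  page = code[i:i + PAGE_SIZE]
                  shared[hashlib.sha256(page).hexdigest()].append((vm, i // PAGE_SIZE))
          # Pages saved = total pages minus the number of distinct page contents.
          total = sum(len(v) for v in shared.values())
          return total - len(shared), shared

      segments = {"vm0": b"\x90" * 3 * PAGE_SIZE, "vm1": b"\x90" * 3 * PAGE_SIZE}
      saved, _ = dedup_code_pages(segments)
      print(f"pages deduplicated: {saved}")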

  15. Static Memory Deduplication for Performance Optimization in Cloud Computing

    PubMed Central

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-01-01

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible. PMID:28448434

  16. Verification of immune response optimality through cybernetic modeling.

    PubMed

    Batt, B C; Kompala, D S

    1990-02-09

    An immune response cascade that is T cell independent begins with the stimulation of virgin lymphocytes by antigen to differentiate into large lymphocytes. These immune cells can either replicate themselves or differentiate into plasma cells or memory cells. Plasma cells produce antibody at a specific rate up to two orders of magnitude greater than large lymphocytes. However, plasma cells have short life-spans and cannot replicate. Memory cells produce only surface antibody, but in the event of a subsequent infection by the same antigen, memory cells revert rapidly to large lymphocytes. Immunologic memory is maintained throughout the organism's lifetime. Many immunologists believe that the optimal response strategy calls for large lymphocytes to replicate first, then differentiate into plasma cells and when the antigen has been nearly eliminated, they form memory cells. A mathematical model incorporating the concept of cybernetics has been developed to study the optimality of the immune response. Derived from the matching law of microeconomics, cybernetic variables control the allocation of large lymphocytes to maximize the instantaneous antibody production rate at any time during the response in order to most efficiently inactivate the antigen. A mouse is selected as the model organism and bacteria as the replicating antigen. In addition to verifying the optimal switching strategy, results showing how the immune response is affected by antigen growth rate, initial antigen concentration, and the number of antibodies required to eliminate an antigen are included.

  17. Distributed mixed-integer fuzzy hierarchical programming for municipal solid waste management. Part II: scheme analysis and mechanism revelation.

    PubMed

    Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Chen, Jiapei; Chen, Xiujuan; Li, Kailong

    2017-03-01

    As presented in the first companion paper, distributed mixed-integer fuzzy hierarchical programming (DMIFHP) was developed for municipal solid waste management (MSWM) under complexities of heterogeneities, hierarchy, discreteness, and interactions. Beijing was selected as a representative case. This paper focuses on presenting the obtained schemes and the revealed mechanisms of the Beijing MSWM system. The optimal MSWM schemes for Beijing under various solid waste treatment policies, and their differences, are deliberated. The impacts of facility expansion, hierarchy, and spatial heterogeneities and potential extensions of DMIFHP are also discussed. A number of findings are revealed from the results and a series of comparisons and analyses. For instance, DMIFHP is capable of robustly reflecting these complexities in MSWM systems, especially for Beijing. The optimal MSWM schemes are of fragmented patterns due to the dominant role of the proximity principle in allocating solid waste treatment resources, and they are closely related to regulated ratios of landfilling, incineration, and composting. Communities without significant differences among distances to different types of treatment facilities are more sensitive to these ratios than others. The complexities of hierarchy and heterogeneities pose significant impacts on MSWM practices. Spatial dislocation of MSW generation rates and facility capacities caused by unreasonable planning in the past may result in insufficient utilization of treatment capacities under substantial influences of transportation costs. The problems of unreasonable MSWM planning, e.g., the severe imbalance among different technologies and the complete vacancy of ten facilities, deserve the deliberation of the public and the municipal or local governments in Beijing. These findings are helpful for gaining insights into MSWM systems under these complexities, mitigating key challenges in the planning of these systems, improving the related management practices, and eliminating potential socio-economic and eco-environmental issues resulting from unreasonable management.

  18. Hierarchical organization of macaque and cat cortical sensory systems explored with a novel network processor.

    PubMed

    Hilgetag, C C; O'Neill, M A; Young, M P

    2000-01-29

    Neuroanatomists have described a large number of connections between the various structures of monkey and cat cortical sensory systems. Because of the complexity of the connection data, analysis is required to unravel what principles of organization they imply. To date, analysis of laminar origin and termination connection data to reveal hierarchical relationships between the cortical areas has been the most widely acknowledged approach. We programmed a network processor that searches for optimal hierarchical orderings of cortical areas given known hierarchical constraints and rules for their interpretation. For all cortical systems and all cost functions, the processor found a multitude of equally low-cost hierarchies. Laminar hierarchical constraints that are presently available in the anatomical literature were therefore insufficient to constrain a unique ordering for any of the sensory systems we analysed. Hierarchical orderings of the monkey visual system that have been widely reported, but which were derived by hand, were not among the optimal orderings. All the cortical systems we studied displayed a significant degree of hierarchical organization, and the anatomical constraints from the monkey visual and somato-motor systems were satisfied with very few constraint violations in the optimal hierarchies. The visual and somato-motor systems in that animal were therefore surprisingly strictly hierarchical. Most inconsistencies between the constraints and the hierarchical relationships in the optimal structures for the visual system were related to connections of area FST (fundus of superior temporal sulcus). We found that the hierarchical solutions could be further improved by assuming that FST consists of two areas, which differ in the nature of their projections. Indeed, we found that perfect hierarchical arrangements of the primate visual system, without any violation of anatomical constraints, could be obtained under two reasonable conditions, namely the subdivision of FST into two distinct areas, whose connectivity we predict, and the abolition of at least one of the less reliable rule constraints. Our analyses showed that the future collection of the same type of laminar constraints, or the inclusion of new hierarchical constraints from thalamocortical connections, will not resolve the problem of multiple optimal hierarchical representations for the primate visual system. Further data, however, may help to specify the relative ordering of some more areas. This indeterminacy of the visual hierarchy is in part due to the reported absence of some connections between cortical areas. These absences are consistent with limited cross-talk between differentiated processing streams in the system. Hence, hierarchical representation of the visual system is affected by, and must take into account, other organizational features, such as processing streams.

  19. Neural Mechanisms of Information Storage in Visual Short-Term Memory

    PubMed Central

    Serences, John T.

    2016-01-01

    The capacity to briefly memorize fleeting sensory information supports visual search and behavioral interactions with relevant stimuli in the environment. Traditionally, studies investigating the neural basis of visual short term memory (STM) have focused on the role of prefrontal cortex (PFC) in exerting executive control over what information is stored and how it is adaptively used to guide behavior. However, the neural substrates that support the actual storage of content-specific information in STM are more controversial, with some attributing this function to PFC and others to the specialized areas of early visual cortex that initially encode incoming sensory stimuli. In contrast to these traditional views, I will review evidence suggesting that content-specific information can be flexibly maintained in areas across the cortical hierarchy ranging from early visual cortex to PFC. While the factors that determine exactly where content-specific information is represented are not yet entirely clear, recognizing the importance of task-demands and better understanding the operation of non-spiking neural codes may help to constrain new theories about how memories are maintained at different resolutions, across different timescales, and in the presence of distracting information. PMID:27668990

  20. Acquisition and improvement of human motor skills: Learning through observation and practice

    NASA Technical Reports Server (NTRS)

    Iba, Wayne

    1991-01-01

    Skilled movement is an integral part of the human existence. A better understanding of motor skills and their development is a prerequisite to the construction of truly flexible intelligent agents. We present MAEANDER, a computational model of human motor behavior, that uniformly addresses both the acquisition of skills through observation and the improvement of skills through practice. MAEANDER consists of a sensory-effector interface, a memory of movements, and a set of performance and learning mechanisms that let it recognize and generate motor skills. The system initially acquires such skills by observing movements performed by another agent and constructing a concept hierarchy. Given a stored motor skill in memory, MAEANDER will cause an effector to behave appropriately. All learning involves changing the hierarchical memory of skill concepts to more closely correspond to either observed experience or to desired behaviors. We evaluated MAEANDER empirically with respect to how well it acquires and improves both artificial movement types and handwritten script letters from the alphabet. We also evaluate MAEANDER as a psychological model by comparing its behavior to robust phenomena in humans and by considering the richness of the predictions it makes.

  1. Neural mechanisms of information storage in visual short-term memory.

    PubMed

    Serences, John T

    2016-11-01

    The capacity to briefly memorize fleeting sensory information supports visual search and behavioral interactions with relevant stimuli in the environment. Traditionally, studies investigating the neural basis of visual short term memory (STM) have focused on the role of prefrontal cortex (PFC) in exerting executive control over what information is stored and how it is adaptively used to guide behavior. However, the neural substrates that support the actual storage of content-specific information in STM are more controversial, with some attributing this function to PFC and others to the specialized areas of early visual cortex that initially encode incoming sensory stimuli. In contrast to these traditional views, I will review evidence suggesting that content-specific information can be flexibly maintained in areas across the cortical hierarchy ranging from early visual cortex to PFC. While the factors that determine exactly where content-specific information is represented are not yet entirely clear, recognizing the importance of task-demands and better understanding the operation of non-spiking neural codes may help to constrain new theories about how memories are maintained at different resolutions, across different timescales, and in the presence of distracting information. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Distinct hippocampal versus frontoparietal-network contributions to retrieval and memory-guided exploration

    PubMed Central

    Bridge, Donna J.; Cohen, Neal J.; Voss, Joel L.

    2017-01-01

    Memory can profoundly influence new learning, presumably because memory optimizes exploration of to-be-learned material. Although hippocampus and frontoparietal networks have been implicated in memory-guided exploration, their specific and interactive roles have not been identified. We examined eye movements during fMRI scanning to identify neural correlates of the influences of memory retrieval on exploration and learning. Following retrieval of one object in a multi-object array, viewing was strategically directed away from the retrieved object toward non-retrieved objects, such that exploration was directed towards to-be-learned content. Retrieved objects later served as optimal reminder cues, indicating that exploration caused memory to become structured around the retrieved content. Hippocampal activity was associated with memory retrieval whereas frontoparietal activity varied with strategic viewing patterns deployed following retrieval, thus providing spatiotemporal dissociation of memory retrieval from memory-guided learning strategies. Time-lagged fMRI connectivity analyses indicated that hippocampal activity predicted frontoparietal activity to a greater extent for a condition in which retrieval guided exploration than for a passive control condition in which exploration was not influenced by retrieval. This demonstrates network-level interaction effects specific to influences of memory on strategic exploration. These findings show how memory guides behavior during learning and demonstrate distinct yet interactive hippocampal-frontoparietal roles in implementing strategic exploration behaviors that determine the fate of evolving memory representations. PMID:28471729

  3. Distinct Hippocampal versus Frontoparietal Network Contributions to Retrieval and Memory-guided Exploration.

    PubMed

    Bridge, Donna J; Cohen, Neal J; Voss, Joel L

    2017-08-01

    Memory can profoundly influence new learning, presumably because memory optimizes exploration of to-be-learned material. Although hippocampus and frontoparietal networks have been implicated in memory-guided exploration, their specific and interactive roles have not been identified. We examined eye movements during fMRI scanning to identify neural correlates of the influences of memory retrieval on exploration and learning. After retrieval of one object in a multiobject array, viewing was strategically directed away from the retrieved object toward nonretrieved objects, such that exploration was directed toward to-be-learned content. Retrieved objects later served as optimal reminder cues, indicating that exploration caused memory to become structured around the retrieved content. Hippocampal activity was associated with memory retrieval, whereas frontoparietal activity varied with strategic viewing patterns deployed after retrieval, thus providing spatiotemporal dissociation of memory retrieval from memory-guided learning strategies. Time-lagged fMRI connectivity analyses indicated that hippocampal activity predicted frontoparietal activity to a greater extent for a condition in which retrieval guided exploration occurred than for a passive control condition in which exploration was not influenced by retrieval. This demonstrates network-level interaction effects specific to influences of memory on strategic exploration. These findings show how memory guides behavior during learning and demonstrate distinct yet interactive hippocampal-frontoparietal roles in implementing strategic exploration behaviors that determine the fate of evolving memory representations.

  4. The cost of misremembering: Inferring the loss function in visual working memory.

    PubMed

    Sims, Chris R

    2015-03-04

    Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.
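
    The expected-loss idea can be illustrated minimally as below: given samples of recall error and a candidate loss function L(e) = |e|^p, the expected loss scores how costly a given error pattern is, and the exponent p determines how severely large errors are punished relative to small ones. The noise levels and exponents are illustrative; the paper estimates the empirical loss function from behavioral data via inverse decision theory.

      import numpy as np

      def expected_loss(error_samples, p):
          # Monte Carlo estimate of E[|error|^p] for a power-law loss function.
          return float(np.mean(np.abs(error_samples) ** p))

      rng = np.random.default_rng(0)
      errors_precise = rng.normal(0.0, 0.2, 100_000)   # small, frequent errors
      errors_sloppy = rng.normal(0.0, 0.6, 100_000)    # larger spread

      for p in (0.5, 1.0, 2.0):
          print(p, expected_loss(errors_precise, p), expected_loss(errors_sloppy, p))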

  5. About Sleep's Role in Memory

    PubMed Central

    2013-01-01

    Over more than a century of research has established the fact that sleep benefits the retention of memory. In this review we aim to comprehensively cover the field of “sleep and memory” research by providing a historical perspective on concepts and a discussion of more recent key findings. Whereas initial theories posed a passive role for sleep enhancing memories by protecting them from interfering stimuli, current theories highlight an active role for sleep in which memories undergo a process of system consolidation during sleep. Whereas older research concentrated on the role of rapid-eye-movement (REM) sleep, recent work has revealed the importance of slow-wave sleep (SWS) for memory consolidation and also enlightened some of the underlying electrophysiological, neurochemical, and genetic mechanisms, as well as developmental aspects in these processes. Specifically, newer findings characterize sleep as a brain state optimizing memory consolidation, in opposition to the waking brain being optimized for encoding of memories. Consolidation originates from reactivation of recently encoded neuronal memory representations, which occur during SWS and transform respective representations for integration into long-term memory. Ensuing REM sleep may stabilize transformed memories. While elaborated with respect to hippocampus-dependent memories, the concept of an active redistribution of memory representations from networks serving as temporary store into long-term stores might hold also for non-hippocampus-dependent memory, and even for nonneuronal, i.e., immunological memories, giving rise to the idea that the offline consolidation of memory during sleep represents a principle of long-term memory formation established in quite different physiological systems. PMID:23589831

  6. GIS-Based Suitability Model for Assessment of Forest Biomass Energy Potential in a Region of Portugal

    NASA Astrophysics Data System (ADS)

    Quinta-Nova, Luis; Fernandez, Paulo; Pedro, Nuno

    2017-12-01

    This work focuses on developing a decision support system based on multicriteria spatial analysis to assess the potential for generation of biomass residues from forestry sources in a region of Portugal (Beira Baixa). A set of environmental, economic and social criteria was defined, evaluated and weighted in the context of Saaty's analytic hierarchies. The best alternatives were obtained after applying the Analytic Hierarchy Process (AHP). The model was applied to the central region of Portugal, where forest and agriculture are the most representative land uses. Finally, sensitivity analysis of the set of factors and their associated weights was performed to test the robustness of the model. The proposed evaluation model provides a valuable reference for decision makers in establishing a standardized means of selecting the optimal location for new biomass plants.
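
    A minimal AHP weighting step under assumed criteria is sketched below: weights are taken from the principal eigenvector of a pairwise comparison matrix and the consistency ratio is checked. The three criteria and the comparison values are invented; the study uses its own set of environmental, economic, and social criteria.

      import numpy as np

      # Hypothetical pairwise comparisons for three criteria
      # (e.g. slope vs. distance to roads vs. land cover).
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3., 1.0, 2.0],
                    [1/5., 1/2., 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      weights = np.abs(eigvecs[:, k].real)
      weights /= weights.sum()                   # normalized priority vector

      n = A.shape[0]
      ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
      cr = ci / 0.58                             # Saaty's random index for n = 3
      print("weights:", weights.round(3), "CR:", round(cr, 3))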

  7. Predictability and hierarchy in Drosophila behavior.

    PubMed

    Berman, Gordon J; Bialek, William; Shaevitz, Joshua W

    2016-10-18

    Even the simplest of animals exhibit behavioral sequences with complex temporal dynamics. Prominent among the proposed organizing principles for these dynamics has been the idea of a hierarchy, wherein the movements an animal makes can be understood as a set of nested subclusters. Although this type of organization holds potential advantages in terms of motion control and neural circuitry, measurements demonstrating this for an animal's entire behavioral repertoire have been limited in scope and temporal complexity. Here, we use a recently developed unsupervised technique to discover and track the occurrence of all stereotyped behaviors performed by fruit flies moving in a shallow arena. Calculating the optimally predictive representation of the fly's future behaviors, we show that fly behavior exhibits multiple time scales and is organized into a hierarchical structure that is indicative of its underlying behavioral programs and its changing internal states.

  8. An Optimizing Compiler for Petascale I/O on Leadership Class Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Kandemir, Mahmut

    In high-performance computing systems, parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our project explored automated instrumentation and compiler support for I/O intensive applications. It made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology for I/O-intensive HPC applications targeting leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions.

  9. Large-scale hydropower system optimization using dynamic programming and object-oriented programming: the case of the Northeast China Power Grid.

    PubMed

    Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R

    2013-01-01

    This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
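
    A toy single-reservoir version of the dynamic program is sketched below (the study itself handles ten reservoirs with DDDP and OOP memory management): storage is discretized, and each stage chooses the release that maximizes revenue at a time-varying price subject to mass balance. All numbers, including the head-to-energy factor k, are invented for illustration, and spill and turbine limits are ignored.

      def reservoir_dp(inflows, prices, s_max=10, k=1.0):
          states = range(s_max + 1)                    # discrete storage levels
          value = {s: 0.0 for s in states}             # terminal value function
          policy = []
          for t in reversed(range(len(inflows))):      # backward recursion
              new_value, stage_policy = {}, {}
              for s in states:
                  best, best_r = float("-inf"), 0
                  for r in range(0, s + inflows[t] + 1):   # candidate releases
                      s_next = s + inflows[t] - r          # mass balance
                      if s_next > s_max:
                          continue
                      benefit = prices[t] * k * r + value[s_next]
                      if benefit > best:
                          best, best_r = benefit, r
                  new_value[s], stage_policy[s] = best, best_r
              value, policy = new_value, [stage_policy] + policy
          return value, policy

      vals, pol = reservoir_dp(inflows=[3, 5, 2, 4], prices=[30, 20, 50, 40])
      print(vals[5], pol[0][5])    # value and first-stage release at storage 5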

  10. Integrating NOE and RDC using sum-of-squares relaxation for protein structure determination.

    PubMed

    Khoo, Y; Singer, A; Cowburn, D

    2017-07-01

    We revisit the problem of protein structure determination from geometrical restraints from NMR, using convex optimization. It is well-known that the NP-hard distance geometry problem of determining atomic positions from pairwise distance restraints can be relaxed into a convex semidefinite program (SDP). However, often the NOE distance restraints are too imprecise and sparse for accurate structure determination. Residual dipolar coupling (RDC) measurements provide additional geometric information on the angles between atom-pair directions and axes of the principal-axis-frame. The optimization problem involving RDC is highly non-convex and requires a good initialization even within the simulated annealing framework. In this paper, we model the protein backbone as an articulated structure composed of rigid units. Determining the rotation of each rigid unit gives the full protein structure. We propose solving the non-convex optimization problems using the sum-of-squares (SOS) hierarchy, a hierarchy of convex relaxations with increasing complexity and approximation power. Unlike classical global optimization approaches, SOS optimization returns a certificate of optimality if the global optimum is found. Based on the SOS method, we proposed two algorithms-RDC-SOS and RDC-NOE-SOS, that have polynomial time complexity in the number of amino-acid residues and run efficiently on a standard desktop. In many instances, the proposed methods exactly recover the solution to the original non-convex optimization problem. To the best of our knowledge this is the first time SOS relaxation is introduced to solve non-convex optimization problems in structural biology. We further introduce a statistical tool, the Cramér-Rao bound (CRB), to provide an information theoretic bound on the highest resolution one can hope to achieve when determining protein structure from noisy measurements using any unbiased estimator. Our simulation results show that when the RDC measurements are corrupted by Gaussian noise of realistic variance, both SOS based algorithms attain the CRB. We successfully apply our method in a divide-and-conquer fashion to determine the structure of ubiquitin from experimental NOE and RDC measurements obtained in two alignment media, achieving more accurate and faster reconstructions compared to the current state of the art.

  11. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  12. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    NASA Technical Reports Server (NTRS)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described and performance measurements for the single and multiprocessor implementation are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a Reynolds number of 180, Re(sub tau) = 180, has shown good agreement with existing data.

  13. Memory Transformation Enhances Reinforcement Learning in Dynamic Environments.

    PubMed

    Santoro, Adam; Frankland, Paul W; Richards, Blake A

    2016-11-30

    Over the course of systems consolidation, there is a switch from a reliance on detailed episodic memories to generalized schematic memories. This switch is sometimes referred to as "memory transformation." Here we demonstrate a previously unappreciated benefit of memory transformation, namely, its ability to enhance reinforcement learning in a dynamic environment. We developed a neural network that is trained to find rewards in a foraging task where reward locations are continuously changing. The network can use memories for specific locations (episodic memories) and statistical patterns of locations (schematic memories) to guide its search. We find that switching from an episodic to a schematic strategy over time leads to enhanced performance due to the tendency for the reward location to be highly correlated with itself in the short-term, but regress to a stable distribution in the long-term. We also show that the statistics of the environment determine the optimal utilization of both types of memory. Our work recasts the theoretical question of why memory transformation occurs, shifting the focus from the avoidance of memory interference toward the enhancement of reinforcement learning across multiple timescales. As time passes, memories transform from a highly detailed state to a more gist-like state, in a process called "memory transformation." Theories of memory transformation speak to its advantages in terms of reducing memory interference, increasing memory robustness, and building models of the environment. However, the role of memory transformation from the perspective of an agent that continuously acts and receives reward in its environment is not well explored. In this work, we demonstrate a view of memory transformation that defines it as a way of optimizing behavior across multiple timescales. Copyright © 2016 the authors 0270-6474/16/3612228-15$15.00/0.

  14. Self-organization and solution of shortest-path optimization problems with memristive networks

    NASA Astrophysics Data System (ADS)

    Pershin, Yuriy V.; Di Ventra, Massimiliano

    2013-07-01

    We show that memristive networks, namely networks of resistors with memory, can efficiently solve shortest-path optimization problems. Indeed, the presence of memory (time nonlocality) promotes self organization of the network into the shortest possible path(s). We introduce a network entropy function to characterize the self-organized evolution, show the solution of the shortest-path problem and demonstrate the healing property of the solution path. Finally, we provide an algorithm to solve the traveling salesman problem. Similar considerations apply to networks of memcapacitors and meminductors, and networks with memory in various dimensions.

  15. Hippocampal brain-network coordination during volitional exploratory behavior enhances learning

    PubMed Central

    Voss, Joel L.; Gonsalves, Brian D.; Federmeier, Kara D.; Tranel, Daniel; Cohen, Neal J.

    2010-01-01

    Exploratory behaviors during learning determine what is studied and when, helping to optimize subsequent memory performance. We manipulated how much control subjects had over the position of a moving window through which they studied objects and their locations, in order to elucidate the cognitive and neural determinants of exploratory behaviors. Our behavioral, neuropsychological, and neuroimaging data indicate volitional control benefits memory performance, and is linked to a brain network centered on the hippocampus. Increases in correlated activity between the hippocampus and other areas were associated with specific aspects of memory, suggesting that volitional control optimizes interactions among specialized neural systems via the hippocampus. Memory is therefore an active process intrinsically linked to behavior. Furthermore, brain structures typically seen as passive participants in memory encoding (e.g., the hippocampus) are actually part of an active network that controls behavior dynamically as it unfolds. PMID:21102449

  16. Hippocampal brain-network coordination during volitional exploratory behavior enhances learning.

    PubMed

    Voss, Joel L; Gonsalves, Brian D; Federmeier, Kara D; Tranel, Daniel; Cohen, Neal J

    2011-01-01

    Exploratory behaviors during learning determine what is studied and when, helping to optimize subsequent memory performance. To elucidate the cognitive and neural determinants of exploratory behaviors, we manipulated the control that human subjects had over the position of a moving window through which they studied objects and their locations. Our behavioral, neuropsychological and neuroimaging data indicate that volitional control benefits memory performance and is linked to a brain network that is centered on the hippocampus. Increases in correlated activity between the hippocampus and other areas were associated with specific aspects of memory, which suggests that volitional control optimizes interactions among specialized neural systems through the hippocampus. Memory is therefore an active process that is intrinsically linked to behavior. Furthermore, brain structures that are typically seen as passive participants in memory encoding (for example, the hippocampus) are actually part of an active network that controls behavior dynamically as it unfolds.

  17. Acceleration of block-matching algorithms using a custom instruction-based paradigm on a Nios II microprocessor

    NASA Astrophysics Data System (ADS)

    González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco

    2013-12-01

    This contribution focuses on the optimization of matching-based motion estimation algorithms widely used in video coding standards, using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization, which locates code leaks; afterward, a custom instruction set is created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of the complete designs is shown. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks and shows the best combination between on-chip memory and SDRAM for the Nios II processor.
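
    For orientation, a software-only sketch of full-search block matching with the sum-of-absolute-differences (SAD) criterion, the class of motion estimation algorithm this record accelerates; the custom-instruction hardware and the SDRAM/on-chip memory mapping are not modeled, and the function and parameter names are illustrative.

```python
import numpy as np

def full_search_block_matching(ref, cur, block=8, radius=4):
    """Exhaustive block matching: for each block of the current frame, find the
    displacement (within +/- radius pixels) that minimizes the sum of absolute
    differences (SAD) against the reference frame."""
    H, W = cur.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best = (np.inf, 0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        cand = ref[y:y + block, x:x + block].astype(np.int32)
                        sad = np.abs(target - cand).sum()
                        if sad < best[0]:
                            best = (sad, dy, dx)
            vectors[by // block, bx // block] = best[1:]
    return vectors

# Tiny demo: shift a random frame by (2, 3) pixels and recover the motion vector.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(full_search_block_matching(ref, cur)[1, 1])   # expect [-2 -3]
```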

  18. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    NASA Technical Reports Server (NTRS)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  19. Translational Approaches Targeting Reconsolidation

    PubMed Central

    Kroes, Marijn C.W.; LeDoux, Joseph E.; Phelps, Elizabeth A.

    2017-01-01

    Maladaptive learned responses and memories contribute to psychiatric disorders that constitute a significant socio-economic burden. Primary treatment methods teach patients to inhibit maladaptive responses, but do not get rid of the memory itself, which explains why many patients experience a return of symptoms even after initially successful treatment. This highlights the need to discover more persistent and robust techniques to diminish maladaptive learned behaviours. One potentially promising approach is to alter the original memory, as opposed to inhibiting it, by targeting memory reconsolidation. Recent research shows that reactivating an old memory results in a period of memory flexibility and requires restorage, or reconsolidation, for the memory to persist. This reconsolidation period allows a window for modification of a specific old memory. Renewal of memory flexibility following reactivation holds great clinical potential as it enables targeting reconsolidation and changing of specific learned responses and memories that contribute to maladaptive mental states and behaviours. Here, we will review translational research on non-human animals, healthy human subjects, and clinical populations aimed at altering memories by targeting reconsolidation using biological treatments (electrical stimulation, noradrenergic antagonists) or behavioural interference (reactivation–extinction paradigm). Both approaches have been used successfully to modify aversive and appetitive memories, yet effectiveness in treating clinical populations has been limited. We will discuss that memory flexibility depends on the type of memory tested and the brain regions that underlie specific types of memory. Further, when and how we can most effectively reactivate a memory and induce flexibility is largely unclear. Finally, the development of drugs that can target reconsolidation and are safe for use in humans would optimize cross-species translations. Increasing the understanding of the mechanism and limitations of memory flexibility upon reactivation should help optimize efficacy of treatments for psychiatric patients. PMID:27240676

  20. Parallel Element Agglomeration Algebraic Multigrid and Upscaling Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Andrew T.; Benson, Thomas R.; Lee, Chak Shing

    ParELAG is a parallel C++ library for numerical upscaling of finite element discretizations and element-based algebraic multigrid solvers. It provides optimal complexity algorithms to build multilevel hierarchies and solvers that can be used for solving a wide class of partial differential equations (elliptic, hyperbolic, saddle point problems) on general unstructured meshes. Additionally, a novel multilevel solver for saddle point problems with divergence constraint is implemented.

  1. Integrated optimisation technique based on computer-aided capacity and safety evaluation for managing downstream lane-drop merging area of signalised junctions

    NASA Astrophysics Data System (ADS)

    Chen, CHAI; Yiik Diew, WONG

    2017-02-01

    This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at the downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiments, the proposed FCA approach is able to provide capacity and safety evaluations of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and the analytic hierarchy process (AHP). Optimized geometric layouts and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance that depends on traffic volume and speed limit can thus be established at the downstream merging area.

  2. Quadratic Polynomial Regression using Serial Observation Processing: Implementation within DART

    NASA Astrophysics Data System (ADS)

    Hodyss, D.; Anderson, J. L.; Collins, N.; Campbell, W. F.; Reinecke, P. A.

    2017-12-01

    Many Ensemble-Based Kalman filtering (EBKF) algorithms process the observations serially. Serial observation processing views the data assimilation process as an iterative sequence of scalar update equations. What is useful about this data assimilation algorithm is that it has very low memory requirements and does not need complex methods to perform the typical high-dimensional inverse calculation of many other algorithms. Recently, the push has been towards the prediction, and therefore the assimilation of observations, for regions and phenomena for which high resolution is required and/or highly nonlinear physical processes are operating. For these situations, a basic hypothesis is that the use of the EBKF is sub-optimal and performance gains could be achieved by accounting for aspects of the non-Gaussianity. To this end, we develop here a new component of the Data Assimilation Research Testbed [DART] to allow a wide variety of users to test this hypothesis. This new version of DART allows one to run several variants of the EBKF as well as several variants of the quadratic polynomial filter using the same forecast model and observations. Differences between the results of the two systems will then highlight the degree of non-Gaussianity in the system being examined. We will illustrate in this work the differences between the performance of linear versus quadratic polynomial regression in a hierarchy of models from Lorenz-63 to a simple general circulation model.
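
    A minimal sketch of serial scalar observation processing using a simple perturbed-observation ensemble Kalman update; DART's actual filters (for example, the ensemble adjustment filter) differ in detail, and the function names, toy state, and observation operators below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def serial_enkf_update(ens, obs_vals, obs_ops, obs_vars):
    """Assimilate scalar observations one at a time (perturbed-observation EnKF).
    ens      : (n_members, n_state) ensemble of model states
    obs_vals : observed values, one scalar per observation
    obs_ops  : functions mapping a state vector to the observed scalar
    obs_vars : observation error variances
    Each scalar update needs only ensemble means/covariances with the observed
    quantity, so memory stays low and no matrix inversion is required."""
    ens = ens.copy()
    n = ens.shape[0]
    for y, h, r in zip(obs_vals, obs_ops, obs_vars):
        hx = np.apply_along_axis(h, 1, ens)              # prior obs-space ensemble
        hx_mean = hx.mean()
        var_hx = hx.var(ddof=1) + r
        cov_x_hx = ((ens - ens.mean(axis=0)) * (hx - hx_mean)[:, None]).sum(axis=0) / (n - 1)
        gain = cov_x_hx / var_hx                          # Kalman gain, one column
        y_pert = y + np.sqrt(r) * rng.normal(size=n)      # perturbed observations
        ens += (y_pert - hx)[:, None] * gain[None, :]
    return ens

# Tiny demo: 3-variable state, observe the first two components directly.
prior = rng.normal([1.0, 2.0, 3.0], 1.0, size=(50, 3))
post = serial_enkf_update(prior, obs_vals=[1.5, 1.8],
                          obs_ops=[lambda s: s[0], lambda s: s[1]],
                          obs_vars=[0.1, 0.1])
print(prior.mean(axis=0).round(2), post.mean(axis=0).round(2))
```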

  3. A Blocked Linear Method for Optimizing Large Parameter Sets in Variational Monte Carlo

    DOE PAGES

    Zhao, Luning; Neuscamman, Eric

    2017-05-17

    We present a modification to variational Monte Carlo’s linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently-introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott-insulators’ optical band gaps.

  4. Toward a Neurobiology of Delusions

    PubMed Central

    Corlett, P.R.; Taylor, J.R.; Wang, X.-J.; Fletcher, P.C.; Krystal, J.H.

    2013-01-01

    Delusions are the false and often incorrigible beliefs that can cause severe suffering in mental illness. We cannot yet explain them in terms of underlying neurobiological abnormalities. However, by drawing on recent advances in the biological, computational and psychological processes of reinforcement learning, memory, and perception it may be feasible to account for delusions in terms of cognition and brain function. The account focuses on a particular parameter, prediction error – the mismatch between expectation and experience – that provides a computational mechanism common to cortical hierarchies, frontostriatal circuits and the amygdala as well as parietal cortices. We suggest that delusions result from aberrations in how brain circuits specify hierarchical predictions, and how they compute and respond to prediction errors. Defects in these fundamental brain mechanisms can vitiate perception, memory, bodily agency and social learning such that individuals with delusions experience an internal and external world that healthy individuals would find difficult to comprehend. The present model attempts to provide a framework through which we can build a mechanistic and translational understanding of these puzzling symptoms. PMID:20558235

  5. The central role of recognition in auditory perception: a neurobiological model.

    PubMed

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior colliculus neurons and regulates the encoding of the echoic trace in the thalamus. Identification involves correlation of sequential spectral slices of the stimulus-driven neural activity with stored representations in association with multimodal memories, verbal lexicons, and contextual information. Identities are then consolidated in auditory short-term memory and bound with attribute information (usually pitch, loudness, and direction) that has been integrated according to the identities' spectral properties. Attention to, or recall of, a particular identity will excite a particular sequence in the identification hierarchies and so lead to modulation of thalamus and inferior colliculus neural spectrotemporal response fields. This operates as an adaptive filter for identities, or their attributes, and explains many puzzling human auditory behaviors, such as the cocktail party effect, selective attention, and continuity illusions.

  6. Effects of Delay Duration on the WMS Logical Memory Performance of Older Adults with Probable Alzheimer's Disease, Probable Vascular Dementia, and Normal Cognition.

    PubMed

    Montgomery, Valencia; Harris, Katie; Stabler, Anthony; Lu, Lisa H

    2017-05-01

    To examine how the duration of time delay between Wechsler Memory Scale (WMS) Logical Memory I and Logical Memory II (LM) affected participants' recall performance. There are 46,146 total Logical Memory administrations to participants diagnosed with either Alzheimer's disease (AD), vascular dementia (VaD), or normal cognition in the National Alzheimer's Disease Coordinating Center's Uniform Data Set. Only 50% of the sample was administered the standard 20-35 min of delay as specified by WMS-R and WMS-III. We found a significant effect of delay time duration on proportion of information retained for the VaD group compared to its control group, which remained after adding LMI raw score as a covariate. There was poorer retention of information with longer delay for this group. This association was not as strong for the AD and cognitively normal groups. A 24.5-min delay was most optimal for differentiating AD from VaD participants (47.7% classification accuracy), an 18.5-min delay was most optimal for differentiating AD versus normal participants (51.7% classification accuracy), and a 22.5-min delay was most optimal for differentiating VaD versus normal participants (52.9% classification accuracy). Considering diagnostic implications, our findings suggest that test administration should incorporate precise tracking of delay periods. We recommend a 20-min delay with 18-25-min range. Poor classification accuracy based on LM data alone is a reminder that story memory performance is only one piece of data that contributes to complex clinical decisions. However, strict adherence to the recommended range yields optimal data for diagnostic decisions. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Neural bases of event knowledge and syntax integration in comprehension of complex sentences.

    PubMed

    Malaia, Evie; Newman, Sharlene

    2015-01-01

    Comprehension of complex sentences is necessarily supported by both syntactic and semantic knowledge, but what linguistic factors trigger a reader's reliance on a specific system? This functional neuroimaging study orthogonally manipulated argument plausibility and verb event type to investigate cortical bases of the semantic effect on argument comprehension during reading. The data suggest that telic verbs facilitate online processing by means of consolidating the event schemas in episodic memory and by easing the computation of syntactico-thematic hierarchies in the left inferior frontal gyrus. The results demonstrate that syntax-semantics integration relies on trade-offs among a distributed network of regions for maximum comprehension efficiency.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Kai; Song, Linze; Shi, Qiang, E-mail: qshi@iccas.ac.cn

    Based on the path integral approach, we derive a new realization of the exact non-Markovian stochastic Schrödinger equation (SSE). The main difference from the previous non-Markovian quantum state diffusion (NMQSD) method is that the complex Gaussian stochastic process used for the forward propagation of the wave function is correlated, which may be used to reduce the amplitude of the non-Markovian memory term at high temperatures. The new SSE is then written into the recently developed hierarchy of pure states scheme, in a form that is more closely related to the hierarchical equation of motion approach. Numerical simulations are then performed to demonstrate the efficiency of the new method.

  9. Decoherence and thermalization of a pure quantum state in quantum field theory.

    PubMed

    Giraud, Alexandre; Serreau, Julien

    2010-06-11

    We study the real-time evolution of a self-interacting O(N) scalar field initially prepared in a pure, coherent quantum state. We present a complete solution of the nonequilibrium quantum dynamics from a 1/N expansion of the two-particle-irreducible effective action at next-to-leading order, which includes scattering and memory effects. We demonstrate that, restricting one's attention (or ability to measure) to a subset of the infinite hierarchy of correlation functions, one observes an effective loss of purity or coherence and, on longer time scales, thermalization. We point out that the physics of decoherence is well described by classical statistical field theory.

  10. [Sensory integration: hierarchy and synchronization].

    PubMed

    Kriukov, V I

    2005-01-01

    This is the first in a series of mini-reviews devoted to the basic problems and most important effects of attention in terms of neuronal modeling. We believe that the absence of a unified view of the wealth of new data on attention is the main obstacle to further understanding of higher nervous activity. The present work deals with the fundamental problem of reconciling two competing architectures designed to integrate sensory information in the brain. The other mini-reviews will be concerned with the remaining five or six problems of attention, all of them ultimately to be resolved uniformly within the framework of a small modification of the dominant model of attention and memory.

  11. Testing New Programming Paradigms with NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also with increasing complexity of real applications. Technologies have been developed that aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new effort has been made in defining new parallel programming paradigms. The best examples are: HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology its performance is still questionable. Although use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java-threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage was applied to several benchmarks, notably BT and SP, resulting in better sequential performance. In order to overcome the lack of an HPF performance model and guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as lack of support for the "REDISTRIBUTION" directive and no easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outer-most loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with their MPI counterparts for the Class-A problem size in the figure on the next page. These results were obtained on an SGI Origin2000 (195MHz) with the MIPSpro-f77 compiler 7.2.1 for OpenMP and MPI codes and the PGI pghpf-2.4.3 compiler with MPI interface for HPF programs.

  12. Breaking Boundaries: Optimizing Reconsolidation-Based Interventions for Strong and Old Memories

    ERIC Educational Resources Information Center

    Elsey, James W. B.; Kindt, Merel

    2017-01-01

    Recent research has demonstrated that consolidated memories can enter a temporary labile state after reactivation, requiring restabilization in order to persist. This process, known as reconsolidation, potentially allows for the modification and disruption of memory. Much interest in reconsolidation stems from the possibility that maladaptive…

  13. The Impact of Sleep Loss on Hippocampal Function

    ERIC Educational Resources Information Center

    Prince, Toni-Moi; Abel, Ted

    2013-01-01

    Hippocampal cellular and molecular processes critical for memory consolidation are affected by the amount and quality of sleep attained. Questions remain with regard to how sleep enhances memory, what parameters of sleep after learning are optimal for memory consolidation, and what underlying hippocampal molecular players are targeted by sleep…

  14. Finding influential nodes for integration in brain networks using optimal percolation theory.

    PubMed

    Del Ferraro, Gino; Moreno, Andrea; Min, Byungjoon; Morone, Flaviano; Pérez-Ramírez, Úrsula; Pérez-Cervera, Laura; Parra, Lucas C; Holodny, Andrei; Canals, Santiago; Makse, Hernán A

    2018-06-11

    Global integration of information in the brain results from complex interactions of segregated brain networks. Identifying the most influential neuronal populations that efficiently bind these networks is a fundamental problem of systems neuroscience. Here, we apply optimal percolation theory and pharmacogenetic interventions in vivo to predict and subsequently target nodes that are essential for global integration of a memory network in rodents. The theory predicts that integration in the memory network is mediated by a set of low-degree nodes located in the nucleus accumbens. This result is confirmed with pharmacogenetic inactivation of the nucleus accumbens, which eliminates the formation of the memory network, while inactivations of other brain areas leave the network intact. Thus, optimal percolation theory predicts essential nodes in brain networks. This could be used to identify targets of interventions to modulate brain function.

  15. CARL: a LabVIEW 3 computer program for conducting exposure therapy for the treatment of dental injection fear.

    PubMed

    Coldwell, S E; Getz, T; Milgrom, P; Prall, C W; Spadafora, A; Ramsay, D S

    1998-04-01

    This paper describes CARL (Computer Assisted Relaxation Learning), a computerized, exposure-based therapy program for the treatment of dental injection fear. The CARL program operates primarily in two different modes: in vitro, which presents a video-taped exposure hierarchy, and in vivo, which presents scripts for a dentist or hygienist to use while working with a subject. Two additional modes are used to train subjects to use the program and to administer behavioral assessment tests. The program contains five different modules, which function to register a subject, train subjects to use physical and cognitive relaxation techniques, deliver an exposure hierarchy, question subjects about the helpfulness of each of the therapy components, and test for memory effects of anxiolytic medication. Nine subjects have completed the CARL therapy program and 1-yr follow-up as participants in a placebo-controlled clinical trial examining the effects of alprazolam on exposure therapy for dental injection phobia. All nine subjects were able to receive two dental injections, and all reduced their general fear of dental injections. Initial results therefore indicate that the CARL program successfully reduces dental injection fear.

  16. Fast analysis of molecular dynamics trajectories with graphics processing units-Radial distribution function histogramming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levine, Benjamin G., E-mail: ben.levine@temple.ed; Stone, John E., E-mail: johns@ks.uiuc.ed; Kohlmeyer, Axel, E-mail: akohlmey@temple.ed

    2011-05-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 s per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis.

  17. Fast Analysis of Molecular Dynamics Trajectories with Graphics Processing Units—Radial Distribution Function Histogramming

    PubMed Central

    Stone, John E.; Kohlmeyer, Axel

    2011-01-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU’s memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 seconds per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis. PMID:21547007
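
    A CPU-side sketch (in numpy) of the tiling idea only: pair distances are histogrammed tile by tile so that one tile of coordinates is reused against many partner tiles. The GPU-specific machinery described in the record, shared-memory histograms updated with atomic operations and multi-GPU load balancing, is not modeled, and the function name and parameters are illustrative.

```python
import numpy as np

def rdf_histogram(coords_a, coords_b, r_max=10.0, n_bins=100, tile=1024):
    """Histogram of pair distances, computed tile by tile.  Each tile of
    coords_a is held "hot" and reused against every tile of coords_b, the same
    reuse pattern the GPU kernels exploit with fast on-chip memory; per-tile
    partial histograms stand in for the shared-memory histograms that the GPU
    code merges with atomic adds."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins, dtype=np.int64)
    for i in range(0, len(coords_a), tile):
        a = coords_a[i:i + tile]
        for j in range(0, len(coords_b), tile):
            b = coords_b[j:j + tile]
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            hist += np.histogram(d, bins=edges)[0]
    return edges, hist

rng = np.random.default_rng(3)
edges, hist = rdf_histogram(rng.uniform(0, 20, (4000, 3)), rng.uniform(0, 20, (4000, 3)))
print(hist.sum(), "pair distances binned below r_max =", edges[-1])
```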

  18. Fabrication and characterization of shape memory polymers at small-scales

    NASA Astrophysics Data System (ADS)

    Wornyo, Edem

    The objective of this research is to thoroughly investigate the shape memory effect in polymers, and to characterize and optimize these polymers for applications in information storage systems. Previous research effort in this field concentrated on shape memory metals for biomedical applications such as stents. Minimal work has been done on shape memory polymers, and the available work has not fully characterized the behaviors of this category of polymers. Copolymer shape memory materials based on a diethylene glycol dimethacrylate (DEGDMA) crosslinker and a tert-butyl acrylate (tBA) monomer are designed. The design encompasses careful control of the backbone chemistry of the materials. Characterization methods such as dynamic mechanical analysis (DMA) and differential scanning calorimetry (DSC), and novel nanoscale techniques such as atomic force microscopy (AFM) and nanoindentation, are applied to this system of materials. Designed experiments are conducted on the materials to optimize spin coating conditions for thin films. Furthermore, the recovery, which is key to the use of these polymeric materials for information storage, is examined in detail with respect to temperature. In sum, the overarching objectives of the proposed research are to: (i) design shape memory polymers based on polyethylene glycol dimethacrylate (PEGDMA) and diethylene glycol dimethacrylate (DEGDMA) crosslinkers, 2-hydroxyethyl methacrylate (HEMA) and tert-butyl acrylate (tBA) monomers; (ii) utilize dynamic mechanical analysis (DMA) to comprehend the thermomechanical properties of shape memory polymers based on DEGDMA and tBA; (iii) utilize nanoindentation and atomic force microscopy (AFM) to understand the nanoscale behavior of these SMPs, and explore the strain storage and recovery of the polymers from a deformed state; (iv) study the effect of spin coating conditions on thin film quality with designed experiments; and (v) apply neural networks and genetic algorithms to optimize these systems.

  19. Optimization of a human IgG B-cell ELISpot assay for the analysis of vaccine-induced B-cell responses.

    PubMed

    Jahnmatz, Maja; Kesa, Gun; Netterlid, Eva; Buisman, Anne-Marie; Thorstensson, Rigmor; Ahlborg, Niklas

    2013-05-31

    B-cell responses after infection or vaccination are often measured as serum titers of antigen-specific antibodies. Since this does not address the aspect of memory B-cell activity, it may not give a complete picture of the B-cell response. Analysis of memory B cells by ELISpot is therefore an important complement to conventional serology. B-cell ELISpot was developed more than 25 years ago and many assay protocols/reagents would benefit from optimization. We therefore aimed at developing an optimized B-cell ELISpot for the analysis of vaccine-induced human IgG-secreting memory B cells. A protocol was developed based on new monoclonal antibodies to human IgG and biotin-avidin amplification to increase the sensitivity. After comparison of various compounds commonly used to in vitro-activate memory B cells for ELISpot analysis, the TLR agonist R848 plus interleukin (IL)-2 was selected as the most efficient activator combination. The new protocol was subsequently compared to an established protocol, previously used in vaccine studies, based on polyclonal antibodies without biotin avidin amplification and activation of memory B-cells using a mix of antigen, CpG, IL-2 and IL-10. The new protocol displayed significantly better detection sensitivity, shortened the incubation time needed for the activation of memory B cells and reduced the amount of antigen required for the assay. The functionality of the new protocol was confirmed by analyzing specific memory B cells to five different antigens, induced in a limited number of subjects vaccinated against tetanus, diphtheria and pertussis. The limited number of subjects did not allow for a direct comparison with other vaccine studies. Optimization of the B-cell ELISpot will facilitate an improved analysis of IgG-secreting B cells in vaccine studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. The optimal timing of stimulation to induce long-lasting positive effects on episodic memory in physiological aging.

    PubMed

    Manenti, Rosa; Sandrini, Marco; Brambilla, Michela; Cotelli, Maria

    2016-09-15

    Episodic memory displays the largest degree of age-related decline. A noninvasive brain stimulation technique that can be used to modulate memory in physiological aging is transcranial Direct Current Stimulation (tDCS). However, an aspect that has not been adequately investigated in previous studies is the optimal timing of stimulation to induce long-lasting positive effects on episodic memory function. Our previous studies showed episodic memory enhancement in older adults when anodal tDCS was applied over the left lateral prefrontal cortex during encoding or after memory consolidation with or without a contextual reminder. Here we directly compared the two studies to explore which of the tDCS protocols would induce longer-lasting positive effects on episodic memory function in older adults. In addition, we aimed to determine whether subjective memory complaints would be related to the changes in memory performance (forgetting) induced by tDCS, a relevant issue in aging research since individuals with subjective memory complaints seem to be at higher risk of later memory decline. The results showed that anodal tDCS applied after consolidation with a contextual reminder induced longer-lasting positive effects on episodic memory, conceivably through reconsolidation, than anodal tDCS during encoding. Furthermore, we reported, providing new data, a moderate negative correlation between subjective memory complaints and forgetting when anodal tDCS was applied after consolidation with a contextual reminder. This study sheds light on the best-suited timing of stimulation to induce long-lasting positive effects on memory function and might help the clinicians to select the most effective tDCS protocol to prevent memory decline. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Sign epistasis caused by hierarchy within signalling cascades.

    PubMed

    Nghe, Philippe; Kogenaru, Manjunatha; Tans, Sander J

    2018-04-13

    Sign epistasis is a central evolutionary constraint, but its causal factors remain difficult to predict. Here we use the notion of parameterised optima to explain epistasis within a signalling cascade, and test these predictions in Escherichia coli. We show that sign epistasis arises from the benefit of tuning phenotypic parameters of cascade genes with respect to each other, rather than from their complex and incompletely known genetic bases. Specifically, sign epistasis requires only that the optimal phenotypic parameters of one gene depend on the phenotypic parameters of another, independent of other details, such as activating or repressing nature, position within the cascade, intra-genic pleiotropy or genotype. Mutational effects change sign more readily in downstream genes, indicating that optimising downstream genes is more constrained. The findings show that sign epistasis results from the inherent upstream-downstream hierarchy between signalling cascade genes, and can be addressed without exhaustive genotypic mapping.

  2. The role of nonlinear viscoelasticity on the functionality of laminating shortenings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macias-Rodriguez, Braulio A.; Peyronel, Fernanda; Marangoni, Alejandro G.

    The rheology of fats is essential for the development of homogeneous and continuous layered structures of doughs. Here, we define laminating shortenings in terms of rheological behavior displayed during linear-to-nonlinear shear deformations, investigated by large amplitude oscillatory shear rheology. Likewise, we associate the rheological behavior of the shortenings with structural length scales elucidated by ultra-small angle x-ray scattering and cryo-electron microscopy. Shortenings exhibited solid-like viscoelastic and viscoelastoplastic behaviors in the linear and nonlinear regimes respectively. In the nonlinear region, laminating shortenings dissipated more viscous energy (larger normalized dynamic viscosities) than a cake bakery shortening. The fat solid-like network of laminating shortening displayed a three-hierarchy structure and layered crystal aggregates, in comparison to the two-hierarchy structure and spherical-like crystal aggregates of a cake shortening. We argue that the observed rheology, correlated to the structural network, is crucial for optimal laminating performance of shortenings.

  3. Research on comprehensive decision-making of PV power station connecting system

    NASA Astrophysics Data System (ADS)

    Zhou, Erxiong; Xin, Chaoshan; Ma, Botao; Cheng, Kai

    2018-04-01

    To address the incomplete index system and the lack of decision-making that accounts for both subjectivity and objectivity in selecting PV power station grid-connection schemes, a comprehensive approach combining an improved Analytic Hierarchy Process (AHP), Criteria Importance Through Intercriteria Correlation (CRITIC), and grey correlation degree analysis (GCDA) is proposed to select an appropriate connection scheme for a PV power station. First, the indexes of the PV power station connection system are arranged into a recursive hierarchy and subjective weights are calculated with the improved AHP. Then, CRITIC is adopted to determine the objective weight of each index through the comparison intensity and conflict between indexes. Finally, the improved GCDA is applied to screen for the optimal scheme, so that the connection system is selected from both subjective and objective perspectives. A comprehensive decision analysis of a Xinjiang PV power station is conducted and reasonable results are obtained. The findings may provide a scientific basis for investment decisions.

  4. Whole-Body Movements in Long-Term Weightlessness: Hierarchies of the Controlled Variables Are Gravity-Dependent.

    PubMed

    Casellato, Claudia; Pedrocchi, Alessandra; Ferrigno, Giancarlo

    2017-01-01

    Switching between contexts affects the mechanisms underlying motion planning, in particular it may entail reranking the variables to be controlled in defining the motor solutions. Three astronauts performed multiple sessions of whole-body pointing, in normogravity before launch, in prolonged weightlessness onboard the International Space Station, and after return. The effect of gravity context on kinematic and dynamic components was evaluated. Hand trajectory was gravity independent; center-of-mass excursion was highly variable within and between subjects. The body-environment effort exchange, expressed as inertial ankle momentum, was systematically lower in weightlessness than in normogravity. After return on Earth, the system underwent a rapid 1-week readaptation. The study indicates that minimizing the control effort is given greater weight when optimizing the motor plan in weightlessness compared to normogravity: the hierarchies of the controlled variables are gravity dependent.

  5. The Deterministic Information Bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, D. J.; Schwab, David

    2015-03-01

    A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
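
    A rough sketch of a hard-assignment fixed-point iteration in the spirit of the deterministic information bottleneck as summarized above; the exact objective and update in Strouse and Schwab's paper may differ in detail, and the toy joint distribution, cluster count, and beta value below are invented for illustration.

```python
import numpy as np

def dib(p_x, p_y_given_x, n_clusters, beta, n_iter=50, seed=0):
    """Deterministic information bottleneck, sketched as a hard-assignment
    fixed-point iteration: each x is mapped to the cluster t maximizing
    log q(t) - beta * KL(p(y|x) || q(y|t))."""
    rng = np.random.default_rng(seed)
    n_x = len(p_x)
    f = rng.integers(0, n_clusters, size=n_x)            # hard encoder t = f(x)
    eps = 1e-12
    for _ in range(n_iter):
        q_t = np.array([p_x[f == t].sum() for t in range(n_clusters)])
        q_y_t = np.zeros((n_clusters, p_y_given_x.shape[1]))
        for t in range(n_clusters):
            if q_t[t] > 0:
                q_y_t[t] = (p_x[f == t, None] * p_y_given_x[f == t]).sum(axis=0) / q_t[t]
        # KL(p(y|x) || q(y|t)) for every (x, t) pair
        kl = (p_y_given_x[:, None, :] *
              (np.log(p_y_given_x[:, None, :] + eps) - np.log(q_y_t[None, :, :] + eps))).sum(-1)
        score = np.log(q_t + eps)[None, :] - beta * kl
        f = score.argmax(axis=1)
    return f

# Toy joint distribution: 6 inputs, 3 outputs; inputs 0-1, 2-3, 4-5 share similar p(y|x).
p_y_given_x = np.array([[.8, .1, .1], [.7, .2, .1],
                        [.1, .8, .1], [.2, .7, .1],
                        [.1, .1, .8], [.1, .2, .7]])
print(dib(np.full(6, 1 / 6), p_y_given_x, n_clusters=3, beta=5.0))
# Inputs with similar p(y|x) should end up in the same cluster.
```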

  6. [Change in short-term memory in pupils of 5-7th classes in the process of class work].

    PubMed

    Rybakov, V P; Orlova, N I

    2014-01-01

    This study investigated short-term memory (STM) of visual (SVM) and auditory (SAM) modality in boys and girls of middle school age, both across the day and over the course of the school week. The data show that from the 5th to the 7th class, SVM and SAM recall volume increases significantly in children of both genders, while SVM productivity in boys in the 6th-7th classes is higher than in girls of the same age. The amplitude of daily changes in SVM and SAM was found to decrease significantly with age. In all age groups, the range of daily fluctuations in short-term memory of both modalities is higher in boys than in girls. In all age groups, a significant proportion of schoolchildren showed optimal forms of temporal organization of short-term memory: morning, day, and morning-day types. Moreover, over the school week the number of optimal daily-dynamics profiles of short-term memory increases in pupils of the 5th to 7th classes of both genders, which contributes to the optimization of their mental performance.

  7. YAPPA: a Compiler-Based Parallelization Framework for Irregular Applications on MPSoCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovergine, Silvia; Tumeo, Antonino; Villa, Oreste

    Modern embedded systems include hundreds of cores. Because of the difficulty in providing a fast, coherent memory architecture, these systems usually rely on non-coherent, non-uniform memory architectures with private memories for each core. However, programming these systems poses significant challenges. The developer must extract large amounts of parallelism, while orchestrating communication among cores to optimize application performance. These issues become even more significant with irregular applications, which present data sets difficult to partition, unpredictable memory accesses, unbalanced control flow and fine grained communication. Hand-optimizing every single aspect is hard and time-consuming, and it often does not lead to the expected performance. There is a growing gap between such complex and highly-parallel architectures and the high level languages used to describe the specification, which were designed for simpler systems and do not consider these new issues. In this paper we introduce YAPPA (Yet Another Parallel Programming Approach), a compilation framework for the automatic parallelization of irregular applications on modern MPSoCs based on LLVM. We start by considering an efficient parallel programming approach for irregular applications on distributed memory systems. We then propose a set of transformations that can reduce the development and optimization effort. The results of our initial prototype confirm the correctness of the proposed approach.

  8. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
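
    A small, machine-dependent illustration of the stride rule of thumb discussed here, written in Python with numpy rather than the compiled kernels the record studies: traversing a C-ordered array by rows (unit stride) versus by columns (large stride) typically shows a clear timing gap, although the exact ratio depends on the cache hierarchy.

```python
import time
import numpy as np

x = np.random.rand(4096, 4096)          # C-ordered: elements of a row are contiguous

def sum_by_rows(a):
    """Unit-stride traversal: each slice a[i, :] is contiguous in memory."""
    return sum(float(a[i, :].sum()) for i in range(a.shape[0]))

def sum_by_cols(a):
    """Large-stride traversal: each slice a[:, j] jumps a whole row per element."""
    return sum(float(a[:, j].sum()) for j in range(a.shape[1]))

for fn in (sum_by_rows, sum_by_cols):
    t0 = time.perf_counter()
    fn(x)
    print(fn.__name__, round(time.perf_counter() - t0, 3), "s")
# On typical hardware the strided (column) traversal is noticeably slower,
# because it makes poor use of cache lines and hardware prefetching.
```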

  9. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  10. Inferring Soil Moisture Memory from Streamflow Observations Using a Simple Water Balance Model

    NASA Technical Reports Server (NTRS)

    Orth, Rene; Koster, Randal Dean; Seneviratne, Sonia I.

    2013-01-01

    Soil moisture is known for its integrative behavior and resulting memory characteristics. Soil moisture anomalies can persist for weeks or even months into the future, making initial soil moisture a potentially important contributor to skill in weather forecasting. A major difficulty when investigating soil moisture and its memory using observations is the sparse availability of long-term measurements and their limited spatial representativeness. In contrast, there is an abundance of long-term streamflow measurements for catchments of various sizes across the world. We investigate in this study whether such streamflow measurements can be used to infer and characterize soil moisture memory in respective catchments. Our approach uses a simple water balance model in which evapotranspiration and runoff ratios are expressed as simple functions of soil moisture; optimized functions for the model are determined using streamflow observations, and the optimized model in turn provides information on soil moisture memory on the catchment scale. The validity of the approach is demonstrated with data from three heavily monitored catchments. The approach is then applied to streamflow data in several small catchments across Switzerland to obtain a spatially distributed description of soil moisture memory and to show how memory varies, for example, with altitude and topography.
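
    A minimal sketch of the kind of bucket water balance model described above, with evapotranspiration and runoff expressed as simple functions of soil moisture; the functional forms, parameters, and synthetic forcing are hypothetical rather than taken from the study, and memory is summarized here simply as lagged autocorrelation of the simulated soil moisture.

```python
import numpy as np

rng = np.random.default_rng(4)

def run_water_balance(precip, pet, capacity=300.0, alpha=1.0, beta=2.0):
    """Minimal daily bucket model: runoff and evapotranspiration are simple
    power functions of relative soil moisture w/capacity (hypothetical forms).
    Returns daily soil moisture and streamflow (runoff)."""
    w = capacity / 2.0
    soil, flow = [], []
    for p, e in zip(precip, pet):
        frac = w / capacity
        q = p * frac ** beta            # runoff ratio grows with wetness
        et = e * frac ** alpha          # ET limited by available moisture
        w = np.clip(w + p - q - et, 0.0, capacity)
        soil.append(w)
        flow.append(q)
    return np.array(soil), np.array(flow)

def lag_autocorr(x, lag):
    """Soil moisture "memory": correlation between values lag days apart."""
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# Synthetic forcing: occasional rain events, seasonal potential evapotranspiration.
days = 3 * 365
precip = rng.exponential(4.0, days) * (rng.random(days) < 0.3)
pet = 3.0 + 2.0 * np.sin(2 * np.pi * np.arange(days) / 365)
soil, flow = run_water_balance(precip, pet)
print({f"lag {L} d": round(lag_autocorr(soil, L), 2) for L in (10, 30, 90)})
```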

  11. Adapting Wave-front Algorithms to Efficiently Utilize Systems with Deep Communication Hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J.; Lang, Michael; Pakin, Scott

    2011-09-30

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance, especially in hybrid systems using accelerators. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional steps in the parallel computation and higher use of on-chip communications. This tradeoff is explored using a performance model. An implementation using the Reverse-acceleration programming model on the petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
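
    For context, a sketch of the plain (non-hierarchical) wave-front dependency pattern that both this record and the next build on: each grid cell depends on its north and west neighbors, so cells on the same anti-diagonal are mutually independent and could be processed concurrently. The hierarchical scheme in the record additionally restructures which boundary exchanges use which level of the communication hierarchy; that part is not modeled here.

```python
import numpy as np

def wavefront_sweep(cost):
    """Fill a grid where each cell depends on its north and west neighbors.
    Cells on the same anti-diagonal (i + j = const) have no mutual
    dependencies, so each diagonal could be processed in parallel."""
    ny, nx = cost.shape
    val = np.zeros_like(cost)
    for d in range(ny + nx - 1):                 # sweep anti-diagonals in order
        for i in range(max(0, d - nx + 1), min(ny, d + 1)):
            j = d - i
            if i == 0 and j == 0:
                val[i, j] = cost[i, j]
                continue
            north = val[i - 1, j] if i > 0 else np.inf
            west = val[i, j - 1] if j > 0 else np.inf
            val[i, j] = cost[i, j] + min(north, west)
    return val

grid = np.array([[1., 2., 3.],
                 [4., 1., 1.],
                 [1., 5., 1.]])
print(wavefront_sweep(grid))     # cumulative minimum-cost-path values
```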

  12. Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J; Lang, Michael; Pakin, Scott

    2009-01-01

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor-cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model and an implementation on the Petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.

  13. An Ideal Observer Analysis of Visual Working Memory

    ERIC Educational Resources Information Center

    Sims, Chris R.; Jacobs, Robert A.; Knill, David C.

    2012-01-01

    Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around…

  14. Evaluation of Goal Programming for the Optimal Assignment of Inspectors to Construction Projects

    DTIC Science & Technology

    1988-09-01

    [Only front-matter fragments of this report were captured: table-of-contents entries covering equation coefficients; weights, priorities, and the AHP; right-hand-side values; the AHP hierarchy with k levels; a sample matrix for pairwise comparison; and AHP weights and coefficient values (report AFIT/GEM/LSM/88S-16). The abstract itself is truncated: "The purpose of this study was ..."]

  15. Enlisted Personnel Allocation System

    DTIC Science & Technology

    1989-03-01

    The hierarchy is further subdivided into two characteristic groupings: intelligence qualifications and physical qualifications. ... weighted as 30% of the applicant's Intelligence Qualifications score. As shown in Figure 6, a step function generates a score based on the ... There is no artificial time window imposed on any MOS. Any open training date within the full DEP horizon may be recommended by the optimization.

  16. Selection of reference standard during method development using the analytical hierarchy process.

    PubMed

    Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun

    2015-03-25

    Reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standards during method development based on the analytic hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in the quantitative analysis of six phenolic acids from Salvia Miltiorrhiza and its preparations by using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, and rosmarinic acid, with about 79.8% of that priority, is the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to comprehensively consider the benefits and risks of the alternatives. It was an effective and practical tool for the optimization of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
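
    A sketch of the standard AHP priority calculation that this kind of study relies on: the weights are the normalized principal eigenvector of a pairwise comparison matrix, checked with Saaty's consistency ratio. The three-criterion comparison matrix below is invented for illustration and is not the six-criterion matrix used in the paper.

```python
import numpy as np

# Random consistency index values tabulated by Saaty (keyed by matrix size).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def ahp_priorities(pairwise):
    """Priority weights = normalized principal eigenvector of the pairwise
    comparison matrix; also returns Saaty's consistency ratio CR."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                 # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0        # consistency ratio (want < 0.1)
    return w, cr

# Hypothetical 3-criterion comparison: stability vs accuracy vs abundance.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_priorities(A)
print("weights:", w.round(3), "consistency ratio:", round(cr, 3))
```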

  17. Optimal evaluation of infectious medical waste disposal companies using the fuzzy analytic hierarchy process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Chao Chung, E-mail: ho919@pchome.com.tw

    Ever since Taiwan's National Health Insurance implemented the diagnosis-related groups payment system in January 2010, hospital income has declined. Therefore, to meet their medical waste disposal needs, hospitals seek suppliers that provide high-quality services at a low cost. The enactment of the Waste Disposal Act in 1974 had facilitated some improvement in the management of waste disposal. However, since the implementation of the National Health Insurance program, the amount of medical waste from disposable medical products has been increasing. Further, of all the hazardous waste types, the amount of infectious medical waste has increased at the fastest rate. This is because of the increase in the number of items considered as infectious waste by the Environmental Protection Administration. The present study used two important findings from previous studies to determine the critical evaluation criteria for selecting infectious medical waste disposal firms. It employed the fuzzy analytic hierarchy process to set the objective weights of the evaluation criteria and select the optimal infectious medical waste disposal firm through calculation and sorting. The aim was to propose a method of evaluation with which medical and health care institutions could objectively and systematically choose appropriate infectious medical waste disposal firms.

  18. Managing search complexity in linguistic geometry.

    PubMed

    Stilman, B

    1997-01-01

    This paper is a new step in the development of linguistic geometry. This formal theory is intended to discover and generalize the inner properties of human expert heuristics, which have been successful in a certain class of complex control systems, and apply them to different systems. In this paper, we investigate heuristics extracted in the form of hierarchical networks of planning paths of autonomous agents. Employing linguistic geometry tools, the dynamic hierarchy of networks is represented as a hierarchy of formal attribute languages. The main ideas of this methodology are shown in the paper on two pilot examples of the solution of complex optimization problems. The first example is a problem of strategic planning for air combat, in which concurrent actions of four vehicles are simulated as serial interleaving moves. The second example is a problem of strategic planning for the space combat of eight autonomous vehicles (with interleaving moves) that requires generation of a search tree of depth 25 with a branching factor of 30. This is beyond the capabilities of modern and conceivable future computers (employing conventional approaches). In both examples the linguistic geometry tools showed deep and highly selective searches in comparison with conventional search algorithms. For the first example, a sketch of the proof of optimality of the solution is considered.

  19. Optimal evaluation of infectious medical waste disposal companies using the fuzzy analytic hierarchy process.

    PubMed

    Ho, Chao Chung

    2011-07-01

    Ever since Taiwan's National Health Insurance implemented the diagnosis-related groups payment system in January 2010, hospital income has declined. Therefore, to meet their medical waste disposal needs, hospitals seek suppliers that provide high-quality services at a low cost. The enactment of the Waste Disposal Act in 1974 had facilitated some improvement in the management of waste disposal. However, since the implementation of the National Health Insurance program, the amount of medical waste from disposable medical products has been increasing. Further, of all the hazardous waste types, the amount of infectious medical waste has increased at the fastest rate. This is because of the increase in the number of items considered as infectious waste by the Environmental Protection Administration. The present study used two important findings from previous studies to determine the critical evaluation criteria for selecting infectious medical waste disposal firms. It employed the fuzzy analytic hierarchy process to set the objective weights of the evaluation criteria and select the optimal infectious medical waste disposal firm through calculation and sorting. The aim was to propose a method of evaluation with which medical and health care institutions could objectively and systematically choose appropriate infectious medical waste disposal firms. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Optimized distributed computing environment for mask data preparation

    NASA Astrophysics Data System (ADS)

    Ahn, Byoung-Sup; Bang, Ju-Mi; Ji, Min-Kyu; Kang, Sun; Jang, Sung-Hoon; Choi, Yo-Han; Ki, Won-Tai; Choi, Seong-Woon; Han, Woo-Sung

    2005-11-01

    As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) is severely increased and applied OPC layers are expanded to non-critical layers. The transformation of designed pattern data by OPC operations increases data complexity, which causes runtime overheads in subsequent steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit the distributed computing method in order to reduce the runtime of mask data preparation rather than exploit the design hierarchy. Distributed computing uses a cluster of computers connected to a local network. However, there are two things that limit the benefit of the distributed computing method in MDP. First, a sequential MDP job that uses the maximum number of available CPUs is not efficient compared to parallel MDP job execution, due to the input data characteristics. Second, the runtime improvement relative to the input cost is insufficient because the scalability of fracturing tools is limited. In this paper, we discuss an optimal load-balancing environment that is useful in increasing the uptime of the distributed computing system by assigning an appropriate number of CPUs to each input design. We also describe the distributed processing (DP) parameter optimization used to obtain maximum throughput in MDP job processing.

  1. An Ideal Observer Analysis of Visual Working Memory

    PubMed Central

    Sims, Chris R.; Jacobs, Robert A.; Knill, David C.

    2013-01-01

    Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this paper we develop an ideal observer analysis of human visual working memory, by deriving the expected behavior of an optimally performing, but limited-capacity memory system. This analysis is framed around rate–distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in two empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (for example, how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis—one which allows variability in the number of stored memory representations, but does not assume the presence of a fixed item limit—provides an excellent account of the empirical data, and further offers a principled re-interpretation of existing models of visual working memory. PMID:22946744
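
    A small worked example of the kind of rate-distortion bound the framework builds on (a textbook result for a Gaussian source, not the paper's specific model): encoding a source of variance sigma^2 at R bits gives minimum mean squared error D(R) = sigma^2 * 2^(-2R), so splitting a fixed capacity across more items necessarily raises per-item error rather than hitting a hard item limit.

      # Gaussian rate-distortion bound: D(R) = sigma^2 * 2**(-2R).
      # Splitting an assumed fixed memory capacity C across N items gives
      # R = C / N bits per item, so recall error grows smoothly with set size.
      def distortion(sigma2, rate_bits):
          return sigma2 * 2.0 ** (-2.0 * rate_bits)

      capacity_bits = 6.0     # assumed total capacity, for illustration only
      sigma2 = 1.0            # stimulus variance
      for n_items in (1, 2, 4, 8):
          d = distortion(sigma2, capacity_bits / n_items)
          print(f"{n_items} items -> expected squared error {d:.3f}")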

  2. Patterning optimization for 55nm design rule DRAM/flash memory using production-ready customized illuminations

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith

    2005-11-01

    Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components for a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low k1, e.g. k1<0.31, both resolution and imaging contrast can be severely limited by the current imaging tools while using the standard illumination sources. Illumination optimization is a process where the source shape is varied, in both profile and intensity distribution, to achieve enhancement in the final image contrast as compared to using the non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces source shapes that are "free-form"-like and can be too complex to be directly applicable for production, lacking the radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet the production-ready or manufacturability requirements and ensure optimal performance gains. In this work, we describe our approach to the illumination optimization for k1<0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include minimum distance between poles, minimum opening pole angles, minimum ring width and minimum source filling factor in the sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by the NILS-based metrics, we show that the optimal and production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and, more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern-specific illumination optimizations using the ASML XT:1400i and other latest imaging systems. Mask model based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Luning; Neuscamman, Eric

    We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott-insulators' optical band gaps.

  4. A comparison of IQ and memory cluster solutions in moderate and severe pediatric traumatic brain injury.

    PubMed

    Thaler, Nicholas S; Terranova, Jennifer; Turner, Alisa; Mayfield, Joan; Allen, Daniel N

    2015-01-01

    Recent studies have examined heterogeneous neuropsychological outcomes in childhood traumatic brain injury (TBI) using cluster analysis. These studies have identified homogeneous subgroups based on tests of IQ, memory, and other cognitive abilities that show some degree of association with specific cognitive, emotional, and behavioral outcomes, and have demonstrated that the clusters derived for children with TBI are different from those observed in normal populations. However, the extent to which these subgroups are stable across abilities has not been examined, and this has significant implications for the generalizability and clinical utility of TBI clusters. The current study addressed this by comparing IQ and memory profiles of 137 children who sustained moderate-to-severe TBI. Cluster analysis of IQ and memory scores indicated that a four-cluster solution was optimal for the IQ scores and a five-cluster solution was optimal for the memory scores. Three clusters on each battery differed primarily by level of performance, while the others had pattern variations. Cross-plotting the clusters across respective IQ and memory test scores indicated that clusters defined by level were generally stable, while clusters defined by pattern differed. Notably, children with slower processing speed exhibited low-average to below-average performance on memory indexes. These results provide some support for the stability of previously identified memory and IQ clusters and provide information about the relationship between IQ and memory in children with TBI.

  5. Memory effects in funnel ratchet of self-propelled particles

    NASA Astrophysics Data System (ADS)

    Hu, Cai-Tian; Wu, Jian-Chun; Ai, Bao-Quan

    2017-05-01

    The transport of self-propelled particles with memory effects is investigated in a two-dimensional periodic channel. Funnel-shaped barriers are regularly arrayed in the channel. Due to the asymmetry of the barriers, the self-propelled particles can be rectified. It is found that the memory effects of the rotational diffusion can strongly affect the rectified transport. The memory effects do not always break the rectified transport, and there exists an optimal finite value of correlation time at which the rectified efficiency takes its maximal value. We also find that the optimal values of parameters (the self-propulsion speed, the translocation diffusion coefficient, the rotational noise intensity, and the self-rotational diffusion coefficient) can facilitate the rectified transport. When introducing a finite load, particles with different self-propulsion speeds move to different directions and can be separated.
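
    A minimal sketch of how a finite correlation time introduces rotational memory of the kind discussed above (a generic active-particle model with exponentially correlated angular noise, standing in for the paper's specific equations; all parameter values are illustrative).

      # Active particle whose angular noise is exponentially correlated
      # (Ornstein-Uhlenbeck), so the direction of motion carries a memory with
      # correlation time tau. Parameters are illustrative, not from the paper.
      import numpy as np

      def simulate(v0=1.0, d_rot=0.5, tau=1.0, dt=0.01, steps=10_000, seed=0):
          rng = np.random.default_rng(seed)
          x = y = theta = eta = 0.0
          decay = np.exp(-dt / tau)
          kick = np.sqrt(d_rot / tau * (1.0 - decay**2))
          for _ in range(steps):
              eta = eta * decay + kick * rng.standard_normal()   # correlated angular noise
              theta += eta * dt
              x += v0 * np.cos(theta) * dt
              y += v0 * np.sin(theta) * dt
          return x, y

      print(simulate())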

  6. Adjusting process count on demand for petascale global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sosonkina, Masha; Watson, Layne T.; Radcliffe, Nicholas R.

    2012-11-23

    There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
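
    An illustrative sketch of the monitor-and-spawn idea (not pVTdirect's actual mechanism, which works with MPI process spawning): check available memory and only grow the worker pool while enough memory remains. The threshold and worker function below are assumptions for illustration.

      # Illustrative monitor-and-spawn control logic: grow the worker pool only
      # while enough free memory remains.
      import multiprocessing as mp
      import psutil

      MIN_FREE_BYTES = 2 * 1024**3          # assumed threshold: keep 2 GiB free

      def worker(task):
          return sum(x * x for x in range(task))   # stand-in for real work

      def run(tasks, max_procs=8):
          procs = 1
          while procs < max_procs and psutil.virtual_memory().available > MIN_FREE_BYTES:
              procs += 1                     # add another worker only if memory allows
          with mp.Pool(processes=procs) as pool:
              return pool.map(worker, tasks)

      if __name__ == "__main__":
          print(run([10_000, 20_000, 30_000]))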

  7. Data Movement Dominates: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, Bruce L.

    Over the past three years in this project, what we have observed is that the primary reason for data movement in large-scale systems is that the per-node capacity is not large enough—i.e., one of the solutions to the data-movement problem (certainly not the only solution that is required, but a significant one nonetheless) is to increase per-node capacity so that inter-node traffic is reduced. This unfortunately is not as simple as it sounds. Today's main memory systems for datacenters, enterprise computing systems, and supercomputers fail to provide high per-socket capacity [Dirik & Jacob 2009; Cooper-Balis et al. 2012], except at extremely high price points (factors of 10–100x the cost/bit of consumer main-memory systems) [Stokes 2008]. The reason is that our choice of technology for today's main memory systems—i.e., DRAM, which we have used as a main-memory technology since the 1970s [Jacob et al. 2007]—can no longer keep up with our needs for density and price per bit. Main memory systems have always been built from the cheapest, densest, lowest-power memory technology available, and DRAM is no longer the cheapest, the densest, nor the lowest-power storage technology out there. It is now time for DRAM to go the way that SRAM went: move out of the way for a cheaper, slower, denser storage technology, and become a cache instead. This inflection point has happened before, in the context of SRAM yielding to DRAM. There was once a time that SRAM was the storage technology of choice for all main memories [Tomasulo 1967; Thornton 1970; Kidder 1981]. However, once DRAM hit volume production in the 1970s and 80s, it supplanted SRAM as a main memory technology because it was cheaper, and it was denser. It also happened to be lower power, but that was not the primary consideration of the day. At the time, it was recognized that DRAM was much slower than SRAM, but it was only at the supercomputer level (for instance, the Cray X-MP in the 1980s and its follow-on, the Cray Y-MP, in the 1990s) that one could afford to build ever-larger main memories out of SRAM—the reasoning for moving to DRAM was that an appropriately designed memory hierarchy, built of DRAM as main memory and SRAM as a cache, would approach the performance of SRAM, at the price-per-bit of DRAM [Mashey 1999]. Today it is quite clear that, were one to build an entire multi-gigabyte main memory out of SRAM instead of DRAM, one could improve the performance of almost any computer system by up to an order of magnitude—but this option is not even considered, because to build that system would be prohibitively expensive. It is now time to revisit the same design choice in the context of modern technologies and modern systems. For reasons both technical and economic, we can no longer afford to build ever-larger main memory systems out of DRAM. Flash memory, on the other hand, is significantly cheaper and denser than DRAM and therefore should take its place. While it is true that flash is significantly slower than DRAM, one can afford to build much larger main memories out of flash than out of DRAM, and we show that an appropriately designed memory hierarchy, built of flash as main memory and DRAM as a cache, will approach the performance of DRAM, at the price-per-bit of flash. In our studies as part of this project, we have investigated Non-Volatile Main Memory (NVMM), a new main-memory architecture for large-scale computing systems, one that is specifically designed to address the weaknesses described previously.
In particular, it provides the following features. Non-volatility: the bulk of the storage is comprised of NAND flash, and in this organization DRAM is used only as a cache, not as main memory; furthermore, the flash is journaled, which means that operations such as checkpoint/restore are already built into the system. 1+ terabytes of storage per socket: SSDs and DRAM DIMMs have roughly the same form factor (several square inches of PCB surface area), and terabyte SSDs are now commonplace. Performance approaching that of DRAM: DRAM is used as a cache to the flash system. Price-per-bit approaching that of NAND: flash is currently well under $0.50 per gigabyte; DDR3 SDRAM is currently just over $10 per gigabyte [Newegg 2014]. Even today, one can build an easily affordable main memory system with a terabyte or more of NAND storage per CPU socket (which would be extremely expensive were one to use DRAM), and our cycle-accurate, full-system experiments show that this can be done at a performance point that lies within a factor of two of DRAM.
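
    A back-of-the-envelope model of the claim that a DRAM cache in front of flash approaches DRAM performance at flash cost. The latencies and prices below are illustrative placeholders, not measured values from the report: average access time is hit_rate * t_DRAM + miss_rate * t_flash.

      # Average-memory-access-time model for a DRAM cache backed by flash main memory.
      # All numbers are illustrative placeholders, not measurements from the report.
      T_DRAM_NS  = 100.0        # assumed DRAM access latency
      T_FLASH_NS = 50_000.0     # assumed flash access latency
      PRICE_DRAM_PER_GB  = 10.0 # assumed $/GB
      PRICE_FLASH_PER_GB = 0.5  # assumed $/GB

      def amat(hit_rate):
          return hit_rate * T_DRAM_NS + (1.0 - hit_rate) * T_FLASH_NS

      for hr in (0.90, 0.99, 0.999):
          print(f"hit rate {hr:.3f}: avg access {amat(hr):,.0f} ns")

      # Cost of 1 TB built mostly from flash with a 32 GB DRAM cache vs. all-DRAM:
      hybrid = 32 * PRICE_DRAM_PER_GB + (1024 - 32) * PRICE_FLASH_PER_GB
      all_dram = 1024 * PRICE_DRAM_PER_GB
      print(f"hybrid ${hybrid:,.0f} vs. all-DRAM ${all_dram:,.0f} per socket")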

  8. Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan

    In next-generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, it is a challenging task to choose an optimal transmission path in this scenario. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that, by applying the AHP method, a group of weight ratios can be obtained that improves performance across multiple objectives, and that the ANN method is effective in adaptively adjusting the weight ratios when a user's new waiting threshold is generated.

  9. Imaging Tasks Scheduling for High-Altitude Airship in Emergency Condition Based on Energy-Aware Strategy

    PubMed Central

    Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma

    2013-01-01

    Aiming at the imaging task scheduling problem for a high-altitude airship in emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as the two optimization objectives. Firstly, a hierarchical architecture is adopted to convert this scheduling problem into three subproblems, namely task ranking, value task detection, and energy-conservation optimization. Then, algorithms are designed for the subproblems, and their results correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. This paper gives a detailed introduction to the energy-aware optimization strategy, which rationally adjusts the airship's cruising speed based on the distribution of task deadlines, so as to decrease the total energy consumption caused by cruising activities. Finally, the application results and comparison analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822

  10. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  11. Towards robust algorithms for current deposition and dynamic load-balancing in a GPU particle in cell code

    NASA Astrophysics Data System (ADS)

    Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio

    2012-12-01

    We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics Processing Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a python-based C++ meta-programming technique which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
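
    A minimal CPU-side sketch of the conflict-free deposition idea (the actual jasmine code works directly on the GPU memory hierarchy; here, binning particle contributions by cell and reducing per cell stands in for avoiding concurrent writes to the same grid location).

      # Particle-to-grid charge deposition without write conflicts: instead of
      # letting every particle add into the grid concurrently (which needs atomics),
      # bin particle contributions by cell index and reduce per cell.
      import numpy as np

      def deposit(cell_index, charge, n_cells):
          # np.bincount performs the per-cell reduction in one pass.
          return np.bincount(cell_index, weights=charge, minlength=n_cells)

      rng = np.random.default_rng(0)
      cells = rng.integers(0, 16, size=1000)      # cell owning each particle
      q = np.full(1000, 0.001)                    # per-particle charge
      print(deposit(cells, q, 16))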

  12. A Probabilistic Model of Social Working Memory for Information Retrieval in Social Interactions.

    PubMed

    Li, Liyuan; Xu, Qianli; Gan, Tian; Tan, Cheston; Lim, Joo-Hwee

    2018-05-01

    Social working memory (SWM) plays an important role in navigating social interactions. Inspired by studies in psychology, neuroscience, cognitive science, and machine learning, we propose a probabilistic model of SWM to mimic human social intelligence for personal information retrieval (IR) in social interactions. First, we establish a semantic hierarchy as social long-term memory to encode personal information. Next, we propose a semantic Bayesian network as the SWM, which integrates the cognitive functions of accessibility and self-regulation. One subgraphical model implements the accessibility function to learn the social consensus about IR based on social information concepts, clustering, social context, and similarity between persons. Beyond accessibility, one more layer is added to simulate the function of self-regulation to perform the personal adaptation to the consensus based on human personality. Two learning algorithms are proposed to train the probabilistic SWM model on a raw dataset of high uncertainty and incompleteness. One is an efficient learning algorithm based on Newton's method, and the other is a genetic algorithm. Systematic evaluations show that the proposed SWM model is able to learn human social intelligence effectively and outperforms the baseline Bayesian cognitive model. Toward real-world applications, we implement our model on Google Glass as a wearable assistant for social interaction.

  13. CHIMERA: Top-down model for hierarchical, overlapping and directed cluster structures in directed and weighted complex networks

    NASA Astrophysics Data System (ADS)

    Franke, R.

    2016-11-01

    In many networks discovered in biology, medicine, neuroscience and other disciplines, special properties like a certain degree distribution and hierarchical cluster structure (also called communities) can be observed as general organizing principles. Detecting the cluster structure of an unknown network promises to identify functional subdivisions, hierarchy and interactions on a mesoscale. It is not trivial to choose an appropriate detection algorithm because there are multiple network, cluster and algorithmic properties to be considered. Edges can be weighted and/or directed, and clusters can overlap or build a hierarchy in several ways. Algorithms differ not only in runtime and memory requirements but also in the allowed network and cluster properties. They are also based on specific definitions of what a cluster is. On the one hand, a comprehensive network creation model is needed to build a large variety of benchmark networks with different reasonable structures to compare algorithms. On the other hand, if a cluster structure is already known, it is desirable to separate effects of this structure from other network properties. This can be done with null model networks that mimic an observed cluster structure to improve statistics on other network features. A third important application is the general study of properties in networks with different cluster structures, possibly evolving over time. Currently there are good benchmark and creation models available. But what is left is a precise sandbox model to build hierarchical, overlapping and directed clusters for undirected or directed, binary or weighted complex random networks on the basis of a sophisticated blueprint. This gap shall be closed by the model CHIMERA (Cluster Hierarchy Interconnection Model for Evaluation, Research and Analysis), which will be introduced and described here for the first time.

  14. Holistic, model-based optimization of edge leveling as an enabler for lithographic focus control: application to a memory use case

    NASA Astrophysics Data System (ADS)

    Hasan, T.; Kang, Y.-S.; Kim, Y.-J.; Park, S.-J.; Jang, S.-Y.; Hu, K.-Y.; Koop, E. J.; Hinnen, P. C.; Voncken, M. M. A. J.

    2016-03-01

    Advancement of the next generation technology nodes and emerging memory devices demand tighter lithographic focus control. Although the leveling performance of the latest-generation scanners is state of the art, challenges remain at the wafer edge due to large process variations. There are several customer configurable leveling control options available in ASML scanners, some of which are application specific in their scope of leveling improvement. In this paper, we assess the usability of leveling non-correctable error models to identify yield limiting edge dies. We introduce a novel dies-inspec based holistic methodology for leveling optimization to guide tool users in selecting an optimal configuration of leveling options. Significant focus gain, and consequently yield gain, can be achieved with this integrated approach. The Samsung site in Hwaseong observed an improved edge focus performance in a production of a mid-end memory product layer running on an ASML NXT 1960 system. 50% improvement in focus and a 1.5%p gain in edge yield were measured with the optimized configurations.

  15. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugam, Kamesh

    Efficient parallel implementations of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis are challenging. This requires exploiting the data parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of these applications employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependency between steps, thus making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application where the distribution of work and memory access pattern at each time step is irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation to present our new approach, though the techniques should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation to anticipate the future memory access pattern. Access pattern forecasts can then be used to formulate optimization decisions during application execution which improve the performance of the application at a future time step based on the observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver a good aggregate performance. 
We used these optimization techniques and the anticipation strategy to design a cache-aware, memory-efficient parallel algorithm to address the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our approach of using an anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.

  16. Influence of an immunodominant herpes simplex virus type 1 CD8+ T cell epitope on the target hierarchy and function of subdominant CD8+ T cells

    PubMed Central

    2017-01-01

    Herpes simplex virus type 1 (HSV-1) latency in sensory ganglia such as trigeminal ganglia (TG) is associated with a persistent immune infiltrate that includes effector memory CD8+ T cells that can influence HSV-1 reactivation. In C57BL/6 mice, HSV-1 induces a highly skewed CD8+ T cell repertoire, in which half of CD8+ T cells (gB-CD8s) recognize a single epitope on glycoprotein B (gB498-505), while the remainder (non-gB-CD8s) recognize, in varying proportions, 19 subdominant epitopes on 12 viral proteins. The gB-CD8s remain functional in TG throughout latency, while non-gB-CD8s exhibit varying degrees of functional compromise. To understand how dominance hierarchies relate to CD8+ T cell function during latency, we characterized the TG-associated CD8+ T cells following corneal infection with a recombinant HSV-1 lacking the immunodominant gB498-505 epitope (S1L). S1L induced a numerically equivalent CD8+ T cell infiltrate in the TG that was HSV-specific, but lacked specificity for gB498-505. Instead, there was a general increase of non-gB-CD8s, with specific subdominant epitopes arising to codominance. In a latent S1L infection, non-gB-CD8s in the TG showed a hierarchy targeting different epitopes during latency compared to acute times, and these cells retained an increased functionality at latency. In a latent S1L infection, these non-gB-CD8s also display an equivalent ability to block HSV reactivation in ex vivo ganglionic cultures compared to TG infected with wild-type HSV-1. These data indicate that loss of the immunodominant gB498-505 epitope alters the dominance hierarchy and reduces functional compromise of CD8+ T cells specific for subdominant HSV-1 epitopes during viral latency. PMID:29206240

  17. Execution time supports for adaptive scientific algorithms on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.
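
    A toy sketch of the gather/scatter idea behind such primitives (plain Python standing in for the distributed-memory machinery, with a dict of local arrays playing the role of the network): a translation table maps global indices to (owner, local offset) so loops written against a global index set can fetch and update remote array elements.

      # Toy translation table for distributed arrays: global index -> (owner rank,
      # local offset). Gather pulls remote values into a local buffer; scatter
      # pushes updates back. PARTI-style primitives derive the real message
      # schedule at runtime; here the "network" is just a dict of local arrays.
      def build_table(global_size, n_ranks):
          chunk = (global_size + n_ranks - 1) // n_ranks
          return {g: (g // chunk, g % chunk) for g in range(global_size)}

      def gather(local_arrays, table, global_indices):
          return [local_arrays[r][off] for r, off in (table[g] for g in global_indices)]

      def scatter(local_arrays, table, global_indices, values):
          for g, v in zip(global_indices, values):
              r, off = table[g]
              local_arrays[r][off] = v

      local = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}     # two "ranks", 4 elements each
      tbl = build_table(8, 2)
      print(gather(local, tbl, [6, 1, 3]))           # -> [6, 1, 3]
      scatter(local, tbl, [6], [60])
      print(local[1])                                # -> [4, 5, 60, 7]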

  18. Execution time support for scientific programs on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.

  19. Livermore Big Artificial Neural Network Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin

    2016-07-01

    LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library that is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  20. High Performance Databases For Scientific Applications

    NASA Technical Reports Server (NTRS)

    French, James C.; Grimshaw, Andrew S.

    1997-01-01

    The goal for this task is to develop an Extensible File System (ELFS). ELFS attacks the following problems: 1. providing high-bandwidth performance across architectures; 2. reducing the cognitive burden faced by application programmers when they attempt to optimize; and 3. seamlessly managing the proliferation of data formats and architectural differences. The ELFS approach consists of language and run-time system support that permits the specification of a hierarchy of file classes.

  1. Modeling and optimization of shape memory-superelastic antagonistic beam assembly

    NASA Astrophysics Data System (ADS)

    Tabesh, Majid; Elahinia, Mohammad H.

    2010-04-01

    Superelasticity (SE), shape memory effect (SM), high damping capacity, corrosion resistance, and biocompatibility are the properties of NiTi that make the alloy ideal for biomedical devices. In this work, the 1D model developed by Brinson was modified to capture the shape memory effect, superelasticity and hysteresis behavior, as well as partial transformation in both positive and negative directions. This model was combined with the Euler beam equation which, by approximation, considers 1D compression and tension stress-strain relationships in different layers of a 3D beam assembly cross-section. A shape memory-superelastic NiTi antagonistic beam assembly was simulated with this model. This wire-tube assembly is designed to enhance the performance of the pedicle screws in osteoporotic bones. For the purpose of this study, an objective design is pursued, aiming at optimizing the dimensions and initial configurations of the SMA wire-tube assembly.

  2. Historical Contingency in Controlled Evolution

    NASA Astrophysics Data System (ADS)

    Schuster, Peter

    2014-12-01

    A basic question in evolution is dealing with the nature of an evolutionary memory. At thermodynamic equilibrium, at stable stationary states or other stable attractors the memory on the path leading to the long-time solution is erased, at least in part. Similar arguments hold for unique optima. Optimality in biology is discussed on the basis of microbial metabolism. Biology, on the other hand, is characterized by historical contingency, which has recently become accessible to experimental test in bacterial populations evolving under controlled conditions. Computer simulations give additional insight into the nature of the evolutionary memory, which is ultimately caused by the enormous space of possibilities that is so large that it escapes all attempts of visualization. In essence, this contribution is dealing with two questions of current evolutionary theory: (i) Are organisms operating at optimal performance? and (ii) How is the evolutionary memory built up in populations?

  3. HEP - A semaphore-synchronized multiprocessor with central control. [Heterogeneous Element Processor

    NASA Technical Reports Server (NTRS)

    Gilliland, M. C.; Smith, B. J.; Calvert, W.

    1976-01-01

    The paper describes the design concept of the Heterogeneous Element Processor (HEP), a system tailored to the special needs of scientific simulation. In order to achieve high-speed computation required by simulation, HEP features a hierarchy of processes executing in parallel on a number of processors, with synchronization being largely accomplished by hardware. A full-empty-reserve scheme of synchronization is realized by zero-one-valued hardware semaphores. A typical system has, besides the control computer and the scheduler, an algebraic module, a memory module, a first-in first-out (FIFO) module, an integrator module, and an I/O module. The architecture of the scheduler and the algebraic module is examined in detail.

  4. Characterizing Task-Based OpenMP Programs

    PubMed Central

    Muddukrishna, Ananya; Jonsson, Peter A.; Brorsson, Mats

    2015-01-01

    Programmers struggle to understand performance of task-based OpenMP programs since profiling tools only report thread-based performance. Performance tuning also requires task-based performance in order to balance per-task memory hierarchy utilization against exposed task parallelism. We provide a cost-effective method to extract detailed task-based performance information from OpenMP programs. We demonstrate the utility of our method by quickly diagnosing performance problems and characterizing exposed task parallelism and per-task instruction profiles of benchmarks in the widely-used Barcelona OpenMP Tasks Suite. Programmers can tune performance faster and understand performance tradeoffs more effectively than existing tools by using our method to characterize task-based performance. PMID:25860023

  5. Extreme-scale Algorithms and Solver Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  6. T Cell Receptor-Major Histocompatibility Complex Interaction Strength Defines Trafficking and CD103+ Memory Status of CD8 T Cells in the Brain.

    PubMed

    Sanecka, Anna; Yoshida, Nagisa; Kolawole, Elizabeth Motunrayo; Patel, Harshil; Evavold, Brian D; Frickel, Eva-Maria

    2018-01-01

    T cell receptor-major histocompatibility complex (TCR-MHC) affinities span a wide range in a polyclonal T cell response, yet it is undefined how affinity shapes long-term properties of CD8 T cells during chronic infection with persistent antigen. Here, we investigate how the affinity of the TCR-MHC interaction shapes the phenotype of memory CD8 T cells in the chronically Toxoplasma gondii-infected brain. We employed CD8 T cells from three lines of transnuclear (TN) mice that harbor in their endogenous loci different T cell receptors specific for the same Toxoplasma antigenic epitope ROP7. The three TN CD8 T cell clones span a wide range of affinities to MHCI-ROP7. These three CD8 T cell clones have a distinct and fixed hierarchy in terms of effector function in response to the antigen, measured as proliferation capacity, trafficking, T cell maintenance, and memory formation. In particular, the T cell clone of lowest affinity does not home to the brain. The two higher affinity T cell clones show differences in establishing resident-like memory populations (CD103+) in the brain, with the higher-affinity clone persisting longer in the host during chronic infection. Transcriptional profiling of naïve and activated ROP7-specific CD8 T cells revealed that Klf2, which encodes a transcription factor known to be a negative marker for T cell trafficking, is upregulated in the activated lowest-affinity ROP7 clone. Our data thus suggest that TCR-MHC affinity dictates memory CD8 T cell fate at the site of infection.

  7. Optimal Design for Hetero-Associative Memory: Hippocampal CA1 Phase Response Curve and Spike-Timing-Dependent Plasticity

    PubMed Central

    Miyata, Ryota; Ota, Keisuke; Aonishi, Toru

    2013-01-01

    Recently reported experimental findings suggest that the hippocampal CA1 network stores spatio-temporal spike patterns and retrieves temporally reversed and spread-out patterns. In this paper, we explore the idea that the properties of the neural interactions and the synaptic plasticity rule in the CA1 network enable it to function as a hetero-associative memory recalling such reversed and spread-out spike patterns. In line with Lengyel’s speculation (Lengyel et al., 2005), we firstly derive optimally designed spike-timing-dependent plasticity (STDP) rules that are matched to neural interactions formalized in terms of phase response curves (PRCs) for performing the hetero-associative memory function. By maximizing object functions formulated in terms of mutual information for evaluating memory retrieval performance, we search for STDP window functions that are optimal for retrieval of normal and doubly spread-out patterns under the constraint that the PRCs are those of CA1 pyramidal neurons. The system, which can retrieve normal and doubly spread-out patterns, can also retrieve reversed patterns with the same quality. Finally, we demonstrate that purposely designed STDP window functions qualitatively conform to typical ones found in CA1 pyramidal neurons. PMID:24204822

  8. The Role of Semantic Clustering in Optimal Memory Foraging.

    PubMed

    Montez, Priscilla; Thompson, Graham; Kello, Christopher T

    2015-11-01

    Recent studies of semantic memory have investigated two theories of optimal search adopted from the animal foraging literature: Lévy flights and marginal value theorem. Each theory makes different simplifying assumptions and addresses different findings in search behaviors. In this study, an experiment is conducted to test whether clustering in semantic memory may play a role in evidence for both theories. Labeled magnets and a whiteboard were used to elicit spatial representations of semantic knowledge about animals. Category recall sequences from a separate experiment were used to trace search paths over the spatial representations of animal knowledge. Results showed that spatial distances between animal names arranged on the whiteboard were correlated with inter-response intervals (IRIs) during category recall, and distributions of both dependent measures approximated inverse power laws associated with Lévy flights. In addition, IRIs were relatively shorter when paths first entered animal clusters, and longer when they exited clusters, which is consistent with marginal value theorem. In conclusion, area-restricted searches over clustered semantic spaces may account for two different patterns of results interpreted as supporting two different theories of optimal memory foraging. Copyright © 2015 Cognitive Science Society, Inc.
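
    A small sketch of the marginal value theorem rule invoked above (the recall-rate numbers are assumed toy values): the searcher leaves the current semantic cluster when its instantaneous rate of retrieving new items drops below the long-run average rate across the whole session.

      # Marginal-value-theorem patch-leaving rule applied to memory foraging:
      # stay in the current cluster while the local retrieval rate (items per
      # second since entering the cluster) exceeds the overall average rate.
      def should_leave(items_in_cluster, time_in_cluster, total_items, total_time):
          local_rate = items_in_cluster / time_in_cluster
          global_rate = total_items / total_time
          return local_rate < global_rate

      # Toy numbers: 2 items in the last 8 s of this cluster vs. 20 items in 60 s overall.
      print(should_leave(2, 8.0, 20, 60.0))   # True -> switch to a new cluster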

  9. Impacts of memory on a regular lattice for different population sizes with asynchronous update in spatial snowdrift game

    NASA Astrophysics Data System (ADS)

    Shu, Feng; Liu, Xingwen; Li, Min

    2018-05-01

    Memory is an important factor in the evolution of cooperation in spatial structures. For evolutionary biologists, the problem is often how cooperative acts can emerge in an evolving system. In the case of the snowdrift game, it is found that memory can boost the cooperation level for large cost-to-benefit ratio r, while inhibiting cooperation for small r. Thus, how to enlarge the range of r for the purpose of enhancing cooperation has become a hot issue recently. This paper addresses a new memory-based approach whose core lies in the following: each agent applies the given rule to compare its own historical payoffs within a certain memory size and takes the obtained maximal one as its virtual payoff. Each agent then randomly selects one of its neighbours and compares their virtual payoffs in order to determine its strategy. Both constant-size memory and size-varying memory are investigated by means of an asynchronous updating algorithm on regular lattices with different sizes. Simulation results show that this approach effectively enhances the cooperation level in spatial structures and makes a high cooperation level emerge for both small and large r. Moreover, it is discovered that population sizes have a significant influence on the effects of cooperation.
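
    A minimal sketch of the update rule described above. The imitation step, whereby an agent copies the strategy of whichever of the pair has the larger virtual payoff, is our reading of the abstract and may differ in detail from the paper.

      # Memory-based update for the spatial snowdrift game: an agent's "virtual
      # payoff" is the maximum payoff it remembers over its memory window; it then
      # compares virtual payoffs with one randomly chosen neighbour and (in this
      # sketch) imitates that neighbour's strategy if the neighbour's virtual
      # payoff is higher.
      import random

      def virtual_payoff(payoff_history, memory_size):
          return max(payoff_history[-memory_size:])

      def update_strategy(agent, neighbours, memory_size):
          other = random.choice(neighbours)
          if virtual_payoff(other["payoffs"], memory_size) > virtual_payoff(agent["payoffs"], memory_size):
              agent["strategy"] = other["strategy"]

      a = {"strategy": "D", "payoffs": [0.2, 0.4, 0.1]}
      b = {"strategy": "C", "payoffs": [0.5, 0.3, 0.6]}
      update_strategy(a, [b], memory_size=3)
      print(a["strategy"])   # -> "C"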

  10. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operations play a very important role during the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses and thus high table power consumption. Aiming to solve the problem of the large number of table memory accesses in current methods, and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper lies in introducing index search technology to reduce memory accesses for table look-up and thereby reduce table power consumption. Specifically, in our scheme, we use index search technology to reduce memory accesses by reducing the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with a sequential-search table look-up scheme, and thus save considerable power consumption for CAVLD in H.264/AVC.
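
    An illustrative sketch of the index-search idea (the code table below is made up; real CAVLC tables in H.264 are larger and context dependent): instead of sequentially matching code words, count the leading zeros of the prefix and use the (zero run, suffix) pair as a direct key into the table.

      # Index-search table look-up: the number of leading zeros in the code prefix
      # plus the suffix bits form a direct key into the code table, so no sequential
      # scan over candidate code words is needed. The table below is illustrative,
      # not an actual H.264 CAVLC table.
      CODE_TABLE = {
          (0, 0): "sym_A",   # bitstream "1"
          (1, 0): "sym_B",   # bitstream "010"
          (1, 1): "sym_C",   # bitstream "011"
          (2, 0): "sym_D",   # bitstream "0010"
      }

      def decode_one(bits):
          zeros = 0
          while bits[zeros] == "0":          # count leading zeros of the prefix
              zeros += 1
          suffix = 0 if zeros == 0 else int(bits[zeros + 1], 2)
          consumed = 1 if zeros == 0 else zeros + 2
          return CODE_TABLE[(zeros, suffix)], consumed

      print(decode_one("011..."))   # -> ('sym_C', 3)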

  11. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) to the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from the drawback of slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF which accelerates MUR by searching the optimal step-size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets including ORL and PIE and two text corpora including Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with the representative GNMF solvers.
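
    A minimal sketch of the L-BFGS ingredient the L-FGD method relies on: the standard two-loop recursion approximates the product of the inverse Hessian with a gradient from a short history of (s, y) pairs, avoiding the dense Hessian that makes MFGD memory-hungry. The history vectors below are arbitrary illustrative values.

      # Standard L-BFGS two-loop recursion: approximate H^{-1} g using only the
      # last m curvature pairs s_k = x_{k+1} - x_k and y_k = grad_{k+1} - grad_k.
      import numpy as np

      def lbfgs_direction(grad, s_list, y_list):
          q = grad.copy()
          alphas = []
          for s, y in zip(reversed(s_list), reversed(y_list)):   # first loop, newest first
              rho = 1.0 / np.dot(y, s)
              a = rho * np.dot(s, q)
              q -= a * y
              alphas.append((rho, a, s, y))
          if s_list:                                             # initial scaling H0 = gamma * I
              s, y = s_list[-1], y_list[-1]
              q *= np.dot(s, y) / np.dot(y, y)
          for rho, a, s, y in reversed(alphas):                  # second loop, oldest first
              b = rho * np.dot(y, q)
              q += (a - b) * s
          return q            # approximates H^{-1} grad

      g = np.array([1.0, -2.0])
      s_hist = [np.array([0.1, 0.0]), np.array([0.0, 0.2])]
      y_hist = [np.array([0.3, 0.1]), np.array([0.1, 0.4])]
      print(lbfgs_direction(g, s_hist, y_hist))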

  12. Bayer image parallel decoding based on GPU

    NASA Astrophysics Data System (ADS)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are decoded with a traditional CPU-based method. However, this is too slow when the images become large, for example, 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is the serial part, the second is the task-parallelism part, and the last is the data-parallelism part, including inverse quantization, the inverse discrete wavelet transform (IDWT), and the image post-processing part. To reduce the execution time, the task-parallelism part is optimized with OpenMP techniques. The data-parallelism part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the CPU serial method.
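
    A small CPU-side sketch of the "rewrite the 2D IDWT as 1D passes" idea, using a Haar filter for simplicity (the paper's codec presumably uses different wavelets): inverse-transform rows, then columns, so each pass is an independent, easily parallelized 1D job.

      # Separable 2D inverse Haar wavelet transform expressed as two 1D passes
      # (rows, then columns). Each 1D pass is independent per row/column, which
      # is what makes the GPU/data-parallel mapping straightforward.
      import numpy as np

      def ihaar_1d(coeffs):
          half = coeffs.shape[-1] // 2
          a, d = coeffs[..., :half], coeffs[..., half:]
          out = np.empty_like(coeffs)
          out[..., 0::2] = (a + d) / np.sqrt(2.0)   # even samples
          out[..., 1::2] = (a - d) / np.sqrt(2.0)   # odd samples
          return out

      def ihaar_2d(coeffs):
          rows_done = ihaar_1d(coeffs)           # 1D inverse along rows
          return ihaar_1d(rows_done.T).T         # then along columns

      c = np.arange(16, dtype=float).reshape(4, 4)
      print(ihaar_2d(c))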

  13. Reasoning and Memory: People Make Varied Use of the Information Available in Working Memory

    ERIC Educational Resources Information Center

    Hardman, Kyle O.; Cowan, Nelson

    2016-01-01

    Working memory (WM) is used for storing information in a highly accessible state so that other mental processes, such as reasoning, can use that information. Some WM tasks require that participants not only store information, but also reason about that information to perform optimally on the task. In this study, we used visual WM tasks that had…

  14. Everyday memory and working memory in adolescents with mild intellectual disability.

    PubMed

    Van der Molen, M J; Van Luit, J E H; Van der Molen, Maurits W; Jongmans, Marian J

    2010-05-01

    Everyday memory and its relationship to working memory was investigated in adolescents with mild intellectual disability and compared to typically developing adolescents of the same age (CA) and younger children matched on mental age (MA). Results showed a delay on almost all memory measures for the adolescents with mild intellectual disability compared to the CA control adolescents. Compared to the MA control children, the adolescents with mild intellectual disability performed less well on a general everyday memory index. Only some significant associations were found between everyday memory and working memory for the mild intellectual disability group. These findings were interpreted to suggest that adolescents with mild intellectual disability have difficulty in making optimal use of their working memory when new or complex situations tax their abilities.

  15. A unified construction for the algebro-geometric quasiperiodic solutions of the Lotka-Volterra and relativistic Lotka-Volterra hierarchy

    NASA Astrophysics Data System (ADS)

    Zhao, Peng; Fan, Engui

    2015-04-01

    In this paper, a new type of integrable differential-difference hierarchy, namely, the generalized relativistic Lotka-Volterra (GRLV) hierarchy, is introduced. This hierarchy is closely related to Lotka-Volterra lattice and relativistic Lotka-Volterra lattice, which allows us to provide a unified and effective way to obtain some exact solutions for both the Lotka-Volterra hierarchy and the relativistic Lotka-Volterra hierarchy. In particular, we shall construct algebro-geometric quasiperiodic solutions for the LV hierarchy and the RLV hierarchy in a unified manner on the basis of the finite gap integration theory.
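
    For orientation, the underlying Lotka-Volterra (Volterra) lattice is commonly written as the differential-difference equation (a standard textbook form, not quoted from this paper)

        du_n/dt = u_n (u_{n+1} - u_{n-1}),

    with the relativistic and generalized hierarchies obtained by adjoining higher commuting flows; the algebro-geometric (finite-gap) construction then expresses quasiperiodic solutions of these flows in terms of theta functions on an associated Riemann surface.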

  16. Random Boolean networks for autoassociative memory: Optimization and sequential learning

    NASA Astrophysics Data System (ADS)

    Sherrington, D.; Wong, K. Y. M.

    Conventional neural networks are based on synaptic storage of information, even when the neural states are discrete and bounded. In general, the set of potential local operations is much greater. Here we discuss some aspects of the properties of networks of binary neurons with more general Boolean functions controlling the local dynamics. Two specific aspects are emphasised: (i) optimization in the presence of noise and (ii) a simple model for short-term memory exhibiting primacy and recency in the recall of sequentially taught patterns.
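
    As a toy illustration of the setting (not the specific networks studied here), a network of binary units in which each unit's next state is given by an arbitrary Boolean function of a few inputs can be simulated with lookup tables; the wiring, table contents, and sizes below are random placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        N, K = 100, 3                                   # units and inputs per unit (illustrative)
        inputs = rng.integers(0, N, size=(N, K))        # which units feed each unit
        tables = rng.integers(0, 2, size=(N, 2 ** K))   # one Boolean lookup table per unit

        def step(state):
            """Synchronous update: each unit applies its own Boolean function,
            stored as a truth table indexed by the states of its K inputs."""
            idx = np.zeros(N, dtype=int)
            for k in range(K):
                idx = (idx << 1) | state[inputs[:, k]]
            return tables[np.arange(N), idx]

        state = rng.integers(0, 2, size=N)
        for _ in range(10):                             # iterate the dynamics
            state = step(state)

    Synaptic (threshold) networks correspond to the special case in which every table implements a linearly separable function of its inputs.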

  17. Hierarchy, Dominance, and Deliberation: Egalitarian Values Require Mental Effort.

    PubMed

    Van Berkel, Laura; Crandall, Christian S; Eidelman, Scott; Blanchar, John C

    2015-09-01

    Hierarchy and dominance are ubiquitous. Because social hierarchy is early learned and highly rehearsed, the value of hierarchy enjoys relative ease over competing egalitarian values. In six studies, we interfere with deliberate thinking and measure endorsement of hierarchy and egalitarianism. In Study 1, bar patrons' blood alcohol content was correlated with hierarchy preference. In Study 2, cognitive load increased the authority/hierarchy moral foundation. In Study 3, low-effort thought instructions increased hierarchy endorsement and reduced equality endorsement. In Study 4, ego depletion increased hierarchy endorsement and caused a trend toward reduced equality endorsement. In Study 5, low-effort thought instructions increased endorsement of hierarchical attitudes among those with a sense of low personal power. In Study 6, participants thinking quickly allocated more resources to high-status groups. Across five operationalizations of impaired deliberative thought, hierarchy endorsement increased and egalitarianism receded. These data suggest hierarchy may persist in part because it has a psychological advantage. © 2015 by the Society for Personality and Social Psychology, Inc.

  18. Exascale Hardware Architectures Working Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemmert, S; Ang, J; Chiang, P

    2011-03-15

    The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, all cores may not be fully independent nor fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. In order to mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but unfortunately also likely providing smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, such that interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology will prove unsuitable for exascale, due to unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program. This issue is relatively immediate, as there is only a small window of opportunity to influence hardware design for 2018 machines. Given the short timeline, a firm co-design methodology with vendors is of prime importance.

  19. Trends in Process Analytical Technology: Present State in Bioprocessing.

    PubMed

    Jenzsch, Marco; Bell, Christian; Buziol, Stefan; Kepert, Felix; Wegele, Harald; Hakemeyer, Christian

    2017-08-04

    Process analytical technology (PAT), the regulatory initiative for incorporating quality in pharmaceutical manufacturing, is an area of intense research and interest. If PAT is effectively applied to bioprocesses, this can increase process understanding and control, and mitigate the risk from substandard drug products to both manufacturer and patient. To optimize the benefits of PAT, the entire PAT framework must be considered and each element of PAT must be carefully selected, including sensor and analytical technology, data analysis techniques, control strategies and algorithms, and process optimization routines. This chapter discusses the current state of PAT in the biopharmaceutical industry, including several case studies demonstrating the degree of maturity of various PAT tools. Graphical Abstract Hierarchy of QbD components.

  20. [Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].

    PubMed

    Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang

    2011-12-01

    To investigate the method of "multi-activity-index evaluation and combination optimization of multiple components" for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, evolutionary optimization, and a validation experiment, we optimized the combination of Jiangzhi granules based on the activity indexes of blood serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissue, and the ratio of liver weight to body weight. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) was more reasonable and objective for multi-activity-index evaluation, since it reflects both the rank order of the activity indexes and the objective sample data. LASSO modeling could accurately capture the relationship between the different combinations of Jiangzhi granules and the comprehensive activity index. The optimized combination of Jiangzhi granules showed better values of the comprehensive activity index than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for the combination optimization of Chinese herbal formulas.
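
    A minimal sketch of the LASSO modeling step described above, fitting a sparse linear model that maps the component doses of each tested combination to a comprehensive activity index and reading off which components matter, is shown below; the data shapes, the synthetic data itself, and the regularization strength are illustrative assumptions only.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        # Rows: tested combinations (e.g. from a uniform design); columns: component doses.
        X = rng.uniform(0.0, 1.0, size=(12, 6))
        # Comprehensive activity index per combination (e.g. an AHP/CRITIC-weighted
        # composite of the measured indexes) -- synthetic values here.
        y = X @ np.array([0.8, 0.0, -0.5, 0.0, 0.3, 0.0]) + rng.normal(0.0, 0.05, size=12)

        model = Lasso(alpha=0.01).fit(X, y)        # alpha chosen arbitrarily for the sketch
        print("component weights:", model.coef_)   # zero weights = components pruned by LASSO

    An evolutionary or grid search over the fitted model would then propose the candidate combination to carry into the validation experiment.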

  1. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut

    File layout of array data is a critical factor that affects the behavior of storage caches, and it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  2. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, and it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  3. PIPS-SBB: A Parallel Distributed-Memory Branch-and-Bound Algorithm for Stochastic Mixed-Integer Programs

    DOE PAGES

    Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak

    2016-05-01

    Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.

  4. Phenomenological analysis of medical time series with regular and stochastic components

    NASA Astrophysics Data System (ADS)

    Timashev, Serge F.; Polyakov, Yuriy S.

    2007-06-01

    Flicker-Noise Spectroscopy (FNS), a general approach to the extraction and parameterization of resonant and stochastic components contained in medical time series, is presented. The basic idea of FNS is to treat the correlation links present in sequences of different irregularities, such as spikes, "jumps", and discontinuities in derivatives of different orders, on all levels of the spatiotemporal hierarchy of the system under study as the main information carriers. The tools to extract and analyze the information are power spectra and difference moments (structural functions), which complement each other's information. The structural function stochastic component is formed exclusively by "jumps" of the dynamic variable, while the power spectrum stochastic component is formed by both spikes and "jumps" on every level of the hierarchy. The information "passport" characteristics that are determined by fitting the derived expressions to the experimental variations for the stochastic components of power spectra and structural functions are interpreted as the correlation times and parameters that describe the rate of "memory loss" on these correlation time intervals for different irregularities. The number of the extracted parameters is determined by the requirements of the problem under study. Application of this approach to the analysis of tremor velocity signals for a Parkinsonian patient is discussed.
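
    The second-order difference moment (structure function) that FNS pairs with the power spectrum is simple to estimate from a sampled signal; the sketch below uses synthetic data and an arbitrary lag range purely for illustration.

        import numpy as np

        def structure_function(x, max_lag):
            """Second-order difference moment Phi(tau) = <[x(t + tau) - x(t)]^2>,
            averaged over all admissible time origins for each integer lag."""
            x = np.asarray(x, dtype=float)
            return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                             for lag in range(1, max_lag + 1)])

        # Synthetic noisy signal; the FNS expressions would be fitted to phi.
        t = np.linspace(0.0, 10.0, 2000)
        signal = np.sin(2 * np.pi * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)
        phi = structure_function(signal, max_lag=200)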

  5. Cell of origin associated classification of B-cell malignancies by gene signatures of the normal B-cell hierarchy.

    PubMed

    Johnsen, Hans Erik; Bergkvist, Kim Steve; Schmitz, Alexander; Kjeldsen, Malene Krag; Hansen, Steen Møller; Gaihede, Michael; Nørgaard, Martin Agge; Bæch, John; Grønholdt, Marie-Louise; Jensen, Frank Svendsen; Johansen, Preben; Bødker, Julie Støve; Bøgsted, Martin; Dybkær, Karen

    2014-06-01

    Recent findings have suggested biological classification of B-cell malignancies as exemplified by the "activated B-cell-like" (ABC), the "germinal-center B-cell-like" (GCB) and primary mediastinal B-cell lymphoma (PMBL) subtypes of diffuse large B-cell lymphoma and "recurrent translocation and cyclin D" (TC) classification of multiple myeloma. Biological classification of B-cell derived cancers may be refined by a direct and systematic strategy where identification and characterization of normal B-cell differentiation subsets are used to define the cancer cell of origin phenotype. Here we propose a strategy combining multiparametric flow cytometry, global gene expression profiling and biostatistical modeling to generate B-cell subset specific gene signatures from sorted normal human immature, naive, germinal centrocytes and centroblasts, post-germinal memory B-cells, plasmablasts and plasma cells from available lymphoid tissues including lymph nodes, tonsils, thymus, peripheral blood and bone marrow. This strategy will provide an accurate image of the stage of differentiation, which prospectively can be used to classify any B-cell malignancy and eventually purify tumor cells. This report briefly describes the current models of the normal B-cell subset differentiation in multiple tissues and the pathogenesis of malignancies originating from the normal germinal B-cell hierarchy.

  6. Memory reconsolidation and psychotherapeutic process.

    PubMed

    Liberzon, Israel; Javanbakht, Arash

    2015-01-01

    Lane et al. propose a heuristic model in which distinct, and seemingly irreconcilable, therapies can coexist. Authors postulate that memory reconsolidation is a key common neurobiological process mediating the therapeutic effects. This conceptualization raises a set of important questions regarding neuroscience and translational aspects of fear memory reconsolidation. We discuss the implications of the target article's memory reconsolidation model in the development of more effective interventions, and in the identification of less effective, or potentially harmful approaches, as well as concepts of contextualization, optimal arousal, and combined therapy.

  7. Disease spread across multiple scales in a spatial hierarchy: effect of host spatial structure and of inoculum quantity and distribution.

    PubMed

    Gosme, Marie; Lucas, Philippe

    2009-07-01

    Spatial patterns of both the host and the disease influence disease spread and crop losses. Therefore, the manipulation of these patterns might help improve control strategies. Considering disease spread across multiple scales in a spatial hierarchy allows one to capture important features of epidemics developing in space without using explicitly spatialized variables. Thus, if the system under study is composed of roots, plants, and planting hills, the effect of host spatial pattern can be studied by varying the number of plants per planting hill. A simulation model based on hierarchy theory was used to simulate the effects of large versus small planting hills, low versus high level of initial infections, and aggregated versus uniform distribution of initial infections. The results showed that aggregating the initially infected plants always resulted in slower epidemics than spreading out the initial infections uniformly. Simulation results also showed that, in most cases, disease epidemics were slower in the case of large host aggregates (100 plants/hill) than with smaller aggregates (25 plants/hill), except when the initially infected plants were both numerous and spread out uniformly. The optimal strategy for disease control depends on several factors, including initial conditions. More importantly, the model offers a framework to account for the interplay between the spatial characteristics of the system, rates of infection, and aggregation of the disease.

  8. Performance measurements of the first RAID prototype

    NASA Technical Reports Server (NTRS)

    Chervenak, Ann L.

    1990-01-01

    The performance of RAID the First, a prototype Redundant Array of Inexpensive Disks (RAID), is examined. A hierarchy of bottlenecks that limit overall performance was discovered in the system. The most serious is the memory system contention on the Sun 4/280 host CPU, which limits array bandwidth to 2.3 MBytes/sec. The array performs more successfully on small random operations, achieving nearly 300 I/Os per second before the Sun 4/280 becomes CPU limited. Other bottlenecks in the system are the VME backplane, bandwidth on the disk controller, and overheads associated with the SCSI protocol. All are examined in detail. The main conclusion is that to achieve the potential bandwidth of arrays, more powerful CPUs alone will not suffice. Just as important are adequate host memory bandwidth and support for high bandwidth on disk controllers. Current disk controllers are more often designed to achieve large numbers of small random operations, rather than high bandwidth. Operating systems also need to change to support high bandwidth from disk arrays. In particular, they should transfer data in larger blocks, and should support asynchronous I/O to improve sequential write performance.

  9. Roofline model toolkit: A practical tool for architectural and program analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Yu Jung; Williams, Samuel; Van Straalen, Brian

    We present preliminary results of the Roofline Toolkit for multicore, many core, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented micro benchmarks implemented with Message Passing Interface (MPI), and OpenMP used to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory managed mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
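
    The Roofline bound itself is a one-liner: attainable performance is the smaller of peak compute and the product of arithmetic intensity with the sustained bandwidth of the relevant level of the memory hierarchy. The numbers below are invented machine parameters, used only to show how the characterization engine's measurements would be combined.

        def roofline(peak_gflops, bandwidth_gbs, arithmetic_intensity):
            """Attainable GFLOP/s = min(peak compute, AI * sustained bandwidth)."""
            return min(peak_gflops, arithmetic_intensity * bandwidth_gbs)

        # Illustrative (not measured) machine balance: 200 GFLOP/s peak, 30 GB/s DRAM.
        for ai in (0.25, 1.0, 4.0, 16.0):   # arithmetic intensity in FLOPs per byte moved
            print(ai, roofline(200.0, 30.0, ai))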

  10. Using CLIPS in the domain of knowledge-based massively parallel programming

    NASA Technical Reports Server (NTRS)

    Dvorak, Jiri J.

    1994-01-01

    The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency in respect of parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering are discussed.

  11. The fluency of social hierarchy: the ease with which hierarchical relationships are seen, remembered, learned, and liked.

    PubMed

    Zitek, Emily M; Tiedens, Larissa Z

    2012-01-01

    We tested the hypothesis that social hierarchies are fluent social stimuli; that is, they are processed more easily and therefore liked better than less hierarchical stimuli. In Study 1, pairs of people in a hierarchy based on facial dominance were identified faster than pairs of people equal in their facial dominance. In Study 2, a diagram representing hierarchy was memorized more quickly than a diagram representing equality or a comparison diagram. This faster processing led the hierarchy diagram to be liked more than the equality diagram. In Study 3, participants were best able to learn a set of relationships that represented hierarchy (asymmetry of power)--compared to relationships in which there was asymmetry of friendliness, or compared to relationships in which there was symmetry--and this processing ease led them to like the hierarchy the most. In Study 4, participants found it easier to make decisions about a company that was more hierarchical and thus thought the hierarchical organization had more positive qualities. In Study 5, familiarity as a basis for the fluency of hierarchy was demonstrated by showing greater fluency for male than female hierarchies. This study also showed that when social relationships are difficult to learn, people's preference for hierarchy increases. Taken together, these results suggest one reason people might like hierarchies--hierarchies are easy to process. This fluency for social hierarchies might contribute to the construction and maintenance of hierarchies.

  12. Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming

    2017-02-01

    The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead for the parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate the communication redundancy. Then, we utilize the shared memory to reduce the memory copy overhead of the intra-node communication. Finally, we optimize the communication scheduling using the neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library-SPPARKS. On 32-node Xeon E5-2680 cluster (total 640 cores), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
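
    The aggregation idea, packing every update destined for the same neighbor into one message per sweep instead of sending many small messages, can be sketched independently of the KMC details; the mpi4py calls, message layout, and the assumption of a symmetric neighbor pattern below are illustrative and not the paper's implementation.

        from collections import defaultdict
        from mpi4py import MPI

        comm = MPI.COMM_WORLD

        def exchange_boundary_events(events):
            """events: list of (destination_rank, payload) produced in one KMC sweep.
            Aggregate per destination and send a single message per neighbor,
            removing the per-event message overhead and redundancy."""
            outbox = defaultdict(list)
            for dest, payload in events:
                outbox[dest].append(payload)
            requests = [comm.isend(msgs, dest=dest, tag=0) for dest, msgs in outbox.items()]
            received = []
            for _ in range(len(outbox)):        # assumes each neighbor also sends to us
                received.extend(comm.recv(source=MPI.ANY_SOURCE, tag=0))
            for req in requests:
                req.wait()
            return received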

  13. Dreaming and Offline Memory Consolidation

    PubMed Central

    Wamsley, Erin J.

    2015-01-01

    Converging evidence suggests that dreaming is influenced by the consolidation of memory during sleep. Following encoding, recently formed memory traces are gradually stabilized and reorganized into a more permanent form of long-term storage. Sleep provides an optimal neurophysiological state to facilitate this process, allowing memory networks to be repeatedly reactivated in the absence of new sensory input. The process of memory reactivation and consolidation in the sleeping brain appears to influence conscious experience during sleep, contributing to dream content recalled on awakening. This article outlines several lines of evidence in support of this hypothesis, and responds to some common objections. PMID:24477388

  14. Theory of Wavelet-Based Coarse-Graining Hierarchies for Molecular Dynamics

    DTIC Science & Technology

    2017-04-01

    [Abstract not available; the indexed text is a fragment of the report's list of figures, covering Fourier transforms of atomic coordinate components for crystalline PE (100,800 atoms, sampled at 1 fs) and for alanine dipeptide in vacuum from MD at 500 K and 1 atm.]

  15. Optimization of the development of reproductive organs celepuk jawa (otus angelinae) owl which supplemented by turmeric powder

    NASA Astrophysics Data System (ADS)

    Rini Saraswati, Tyas; Yuniwarti, Enny Yusuf W.; Tana, Silvan

    2018-03-01

    Otus angelinae is a protected animal because of its endangered status, yet it has considerable value, for example in the control of mouse pests. Therefore, this research aims to optimize the reproductive function of Otus angelinae by administering turmeric powder mixed into its feed. The study was conducted on a laboratory scale with two male and two female Otus angelinae, three months of age. The subjects were divided into two groups: a control group and a treatment group given 108 mg of turmeric powder per owl per day mixed into 30 g of catfish per day for a month. The parameters observed were the development of the follicle hierarchy and the ovary weight of the female Otus angelinae, whereas testis development and testes weight were observed for the males. Body weight, liver weight, and the length of the reproductive ducts were also recorded for both sexes. The data were analyzed descriptively. The results showed that the administration of turmeric powder can induce the development of the ovarian follicle hierarchy and increase the length of the reproductive ducts of female Otus angelinae, and can likewise induce the development of the testes and increase the length of the reproductive ducts of male Otus angelinae. The addition of turmeric powder increased the liver weight of the female Otus angelinae but did not affect body weight.

  16. A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    This paper presents an architecture template for next-generation high performance computing systems specifically targeted to irregular applications. We start our work by considering that future generation interconnection and memory bandwidth full-system numbers are expected to grow by a factor of 10. In order to keep up with such a communication capacity, while still resorting to fine-grained multithreading as the main way to tolerate unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we also show how such an architecture template must be coupled with specific techniques in order to optimize bandwidth utilization and achieve the maximum scalability. We propose a technique based on memory references aggregation, together with the related hardware implementation, as one of such optimization techniques. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results prove the benefits provided by both the multi-core approach and the bandwidth optimization reference aggregation technique.

  17. Spin generalization of the Calogero–Moser hierarchy and the matrix KP hierarchy

    NASA Astrophysics Data System (ADS)

    Pashkov, V.; Zabrodin, A.

    2018-05-01

    We establish a correspondence between rational solutions to the matrix KP hierarchy and the spin generalization of the Calogero–Moser system on the level of hierarchies. Namely, it is shown that the rational solutions to the matrix KP hierarchy appear to be isomorphic to the spin Calogero–Moser system in a sense that the dynamics of poles of solutions to the matrix KP hierarchy in the higher times is governed by the higher Hamiltonians of the spin Calogero–Moser integrable hierarchy with rational potential.

  18. Attention modulations on the perception of social hierarchy at distinct temporal stages: an electrophysiological investigation.

    PubMed

    Feng, Chunliang; Tian, Tengxiang; Feng, Xue; Luo, Yue-Jia

    2015-04-01

    Recent behavioral and neuroscientific studies have revealed the preferential processing of superior-hierarchy cues. However, it remains poorly understood whether top-down controlled mechanisms modulate temporal dynamics of neurocognitive substrates underlying the preferential processing of these biologically and socially relevant cues. This was investigated in the current study by recording event-related potentials from participants who were presented with superior or inferior social hierarchy. Participants performed a hierarchy-judgment task that required attention to hierarchy cues or a gender-judgment task that withdrew their attention from these cues. Superior-hierarchy cues evoked stronger neural responses than inferior-hierarchy cues at both early (N170/N200) and late (late positive potential, LPP) temporal stages. Notably, the modulations of top-down attention were identified on the LPP component, such that superior-hierarchy cues evoked larger LPP amplitudes than inferior-hierarchy cues only in the attended condition; whereas the modulations of the N170/N200 component by hierarchy cues were evident in both attended and unattended conditions. These findings suggest that the preferential perception of superior-hierarchy cues involves both relatively automatic attentional bias at the early temporal stage as well as flexible and voluntary cognitive evaluation at the late temporal stage. Finally, these hierarchy-related effects were absent when participants were shown the same stimuli which, however, were not associated with social-hierarchy information in a non-hierarchy task (Experiment 2), suggesting that effects of social hierarchy at early and late temporal stages could not be accounted for by differences in physical attributes between these social cues. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Modeling the Emergence of Modular Leadership Hierarchy During the Collective Motion of Herds Made of Harems

    NASA Astrophysics Data System (ADS)

    Ozogány, Katalin; Vicsek, Tamás

    2015-02-01

    Gregarious animals need to make collective decisions in order to keep their cohesiveness. Several species of them live in multilevel societies, and form herds composed of smaller communities. We present a model for the development of a leadership hierarchy in a herd consisting of loosely connected sub-groups (e.g. harems) by combining self organization and social dynamics. It starts from unfamiliar individuals without relationships and reproduces the emergence of a hierarchical and modular leadership network that promotes an effective spreading of the decisions from more capable individuals to the others, and thus gives rise to a beneficial collective decision. Our results stemming from the model are in a good agreement with our observations of a Przewalski horse herd (Hortobágy, Hungary). We find that the harem-leader to harem-member ratio observed in Przewalski horses corresponds to an optimal network in this approach regarding common success, and that the observed and modeled harem size distributions are close to a lognormal.

  20. Shape design of an optimal comfortable pillow based on the analytic hierarchy process method

    PubMed Central

    Liu, Shuo-Fang; Lee, Yann-Long; Liang, Jung-Chin

    2011-01-01

    Objective Few studies have analyzed the shapes of pillows. The purpose of this study was to investigate the relationship between the pillow shape design and subjective comfort level for asymptomatic subjects. Methods Four basic pillow designs factors were selected on the basis of literature review and recombined into 8 configurations for testing the rank of degrees of comfort. The data were analyzed by the analytic hierarchy process method to determine the most comfortable pillow. Results Pillow number 4 was the most comfortable pillow in terms of head, neck, shoulder, height, and overall comfort. The design factors of pillow number 4 were using a combination of standard, cervical, and shoulder pillows. A prototype of this pillow was developed on the basis of the study results for designing future pillow shapes. Conclusions This study investigated the comfort level of particular users and redesign features of a pillow. A deconstruction analysis would simplify the process of determining the most comfortable pillow design and aid designers in designing pillows for groups. PMID:22654680
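
    The AHP step used in this kind of study, turning a matrix of pairwise comparisons between criteria into a priority (weight) vector, is compact enough to show; the comparison values below are invented, and the principal-eigenvector method is one standard way of extracting the weights.

        import numpy as np

        def ahp_weights(pairwise):
            """Priority vector = normalized principal eigenvector of a
            reciprocal pairwise-comparison matrix."""
            vals, vecs = np.linalg.eig(pairwise)
            principal = np.real(vecs[:, np.argmax(np.real(vals))])
            w = np.abs(principal)
            return w / w.sum()

        # Illustrative comparison of three comfort criteria (head vs. neck vs. shoulder).
        A = np.array([[1.0,   3.0,   5.0],
                      [1/3.0, 1.0,   2.0],
                      [1/5.0, 1/2.0, 1.0]])
        print(ahp_weights(A))   # weights used to rank the candidate pillow configurations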

  1. The power of servant leadership to transform health care organizations for the 21st-century economy.

    PubMed

    Schwartz, Richard W; Tumblin, Thomas F

    2002-12-01

    Physician leadership is emerging as a vital component in transforming the nation's health care industry. Because few physicians have been introduced to the large body of literature on leadership and organizations, we herein provide a concise review, as this literature relates to competitive health care organizations and the leaders who serve them. Although the US health care industry has transitioned to a dynamic market economy governed by a wide range of internal and external forces, health care organizations continue to be dominated by leaders who practice an outmoded transactional style of leadership and by organizational hierarchies that are inherently stagnant. In contrast, outside the health care sector, service industries have repeatedly demonstrated that transformational, situational, and servant leadership styles are most successful in energizing human resources within organizations. This optimization of intellectual capital is further enhanced by transforming organizations into adaptable learning organizations where traditional institutional hierarchies are flattened and efforts to evoke change are typically team driven and mission oriented.

  2. On quantum symmetries of compact metric spaces

    NASA Astrophysics Data System (ADS)

    Chirvasitu, Alexandru

    2015-08-01

    An action of a compact quantum group on a compact metric space (X , d) is (D)-isometric if the distance function is preserved by a diagonal action on X × X. In this study, we show that an isometric action in this sense has the following additional property: the corresponding action on the algebra of continuous functions on X by the convolution semigroup of probability measures on the quantum group contracts Lipschitz constants. In other words, it is isometric in another sense due to Li, Quaegebeur, and Sabbe, which partially answers a question posed by Goswami. We also introduce other possible notions of isometric quantum actions in terms of the Wasserstein p-distances between probability measures on X for p ≥ 1, which are used extensively in optimal transportation. Indeed, all of these definitions of quantum isometry belong to a hierarchy of implications, where the two described above lie at the extreme ends of the hierarchy. We conjecture that they are all equivalent.

  3. BLACKCOMB2: Hardware-software co-design for non-volatile memory in exascale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudge, Trevor

    This work was part of a larger project, Blackcomb2, centered at Oak Ridge National Labs (Jeff Vetter PI) to investigate the opportunities for replacing or supplementing DRAM main memory with nonvolatile memory (NVmemory) in Exascale memory systems. The goal was to reduce the energy consumed by future supercomputer memory systems and to improve their resiliency. Building on the accomplishments of the original Blackcomb Project, funded in 2010, the goal for Blackcomb2 was to identify, evaluate, and optimize the most promising emerging memory technologies, architecture hardware and software technologies, which are essential to provide the necessary memory capacity, performance, resilience, and energy efficiency in Exascale systems. Capacity and energy are the key drivers.

  4. Probing neutrino mass hierarchy by comparing the charged-current and neutral-current interaction rates of supernova neutrinos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Kwang-Chang; Leung Center for Cosmology and Particle Astrophysics; Lee, Fei-Fan

    2016-07-22

    The neutrino mass hierarchy is one of the neutrino fundamental properties yet to be determined. We introduce a method to determine neutrino mass hierarchy by comparing the interaction rate of neutral current (NC) interactions, ν(ν̄) + p → ν(ν̄) + p, and inverse beta decays (IBD), ν̄_e + p → n + e⁺, of supernova neutrinos in scintillation detectors. Neutrino flavor conversions inside the supernova are sensitive to neutrino mass hierarchy. Due to Mikheyev-Smirnov-Wolfenstein effects, the full swapping of the ν̄_e flux with the ν̄_x (x = μ, τ) one occurs in the inverted hierarchy, while such a swapping does not occur in the normal hierarchy. As a result, more high-energy IBD events occur in the detector for the inverted hierarchy than the high-energy IBD events in the normal hierarchy. By comparing the IBD interaction rate with the mass hierarchy independent NC interaction rate, one can determine the neutrino mass hierarchy.

  5. Probing neutrino mass hierarchy by comparing the charged-current and neutral-current interaction rates of supernova neutrinos

    NASA Astrophysics Data System (ADS)

    Lai, Kwang-Chang; Lee, Fei-Fan; Lee, Feng-Shiuh; Lin, Guey-Lin; Liu, Tsung-Che; Yang, Yi

    2016-07-01

    The neutrino mass hierarchy is one of the neutrino fundamental properties yet to be determined. We introduce a method to determine neutrino mass hierarchy by comparing the interaction rate of neutral current (NC) interactions, ν(ν̄) + p → ν(ν̄) + p, and inverse beta decays (IBD), ν̄_e + p → n + e⁺, of supernova neutrinos in scintillation detectors. Neutrino flavor conversions inside the supernova are sensitive to neutrino mass hierarchy. Due to Mikheyev-Smirnov-Wolfenstein effects, the full swapping of the ν̄_e flux with the ν̄_x (x = μ, τ) one occurs in the inverted hierarchy, while such a swapping does not occur in the normal hierarchy. As a result, more high-energy IBD events occur in the detector for the inverted hierarchy than the high-energy IBD events in the normal hierarchy. By comparing the IBD interaction rate with the mass hierarchy independent NC interaction rate, one can determine the neutrino mass hierarchy.

  6. On the Run-Time Optimization of the Boolean Logic of a Program.

    ERIC Educational Resources Information Center

    Cadolino, C.; Guazzo, M.

    1982-01-01

    Considers problem of optimal scheduling of Boolean expression (each Boolean variable represents binary outcome of program module) on single-processor system. Optimization discussed consists of finding operand arrangement that minimizes average execution costs representing consumption of resources (elapsed time, main memory, number of…
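
    The flavor of optimization described in this record can be made concrete for a short-circuited conjunction: if module i costs c_i to evaluate and returns true with probability p_i, a classical exchange argument shows that the expected cost of an AND chain is minimized by evaluating operands in increasing order of c_i / (1 - p_i). The costs and probabilities below are invented, and the exhaustive check only confirms the rule on this small example.

        from itertools import permutations

        def expected_and_cost(order, cost, p_true):
            """Expected cost of short-circuit AND evaluation in the given order:
            operand i runs only if every earlier operand returned true."""
            total, reach = 0.0, 1.0
            for i in order:
                total += reach * cost[i]
                reach *= p_true[i]
            return total

        cost   = [5.0, 1.0, 3.0]      # invented per-module evaluation costs
        p_true = [0.9, 0.5, 0.2]      # invented probabilities of returning true

        greedy = tuple(sorted(range(3), key=lambda i: cost[i] / (1.0 - p_true[i])))
        best = min(permutations(range(3)), key=lambda o: expected_and_cost(o, cost, p_true))
        assert greedy == best         # the ratio rule matches exhaustive search here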

  7. Layout optimization of DRAM cells using rigorous simulation model for NTD

    NASA Astrophysics Data System (ADS)

    Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe

    2014-03-01

    DRAM chip space is mainly determined by the size of the memory cell array patterns, which consist of periodic memory cell features and edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational Lithography such as source mask optimization (SMO) to find the optimal off-axis illumination and optical proximity correction (OPC) combined with model-based SRAF placement are applied to print patterns on target. For 20 nm memory cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, so it allows only two spectral beams to interfere. We will show how to analytically derive the only valid geometrically limited source. Another consequence of the two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes an array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to the non-periodic periphery, so it combines the most critical pitch and highest susceptibility to defocus. The above challenge makes layout correction a complex optimization task, demanding a layout optimization that finds a solution with optimal process stability taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF) and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features. The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally due to negative resist profile taper angles that perturb CD-at-bottom characterization by scanning electron microscope (SEM) measurements. The high resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We have discussed the need for a rigorous mask optimization process for DRAM contact cell layout yielding mask layouts that are optimal in process performance, mask manufacturability and accuracy. In this paper, we have shown the step-by-step process from analytical illumination source derivation, through NTD- and application-tailored model calibration, to layout optimization such as OPC and SRAF placement. Finally, the work has been verified with simulation and experimental results on wafer.

  8. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
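
    The optimization ingredient described above, gradient descent driven by extremely noisy per-batch estimates and stabilized with momentum plus a rescaling/renormalization of the step, can be sketched generically; the quadratic objective, learning rate, momentum value, and normalization rule below are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        def noisy_gradient(w, rng, noise=1.0):
            """Stand-in for a gradient estimated from only a few particle histories
            (here: the gradient of a simple quadratic plus heavy noise)."""
            return 2.0 * (w - 1.0) + noise * rng.normal(size=w.shape)

        rng = np.random.default_rng(3)
        w = np.zeros(8)                    # e.g. beamlet fluence weights (illustrative)
        velocity = np.zeros_like(w)
        lr, beta = 0.05, 0.9               # assumed step size and momentum

        for _ in range(500):
            g = noisy_gradient(w, rng)
            g /= max(np.linalg.norm(g), 1e-12)        # renormalize: keep only the direction
            velocity = beta * velocity + (1.0 - beta) * g
            w = np.maximum(w - lr * velocity, 0.0)    # fluence weights stay nonnegative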

  9. A portable approach for PIC on emerging architectures

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor

    2016-03-01

    A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that three distinct programming paradigms are needed. They are: low-level vector (SIMD) processing, middle-level shared-memory parallel programming, and high-level distributed-memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran2003 also supports interoperability with C so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high-performing compiled languages. Parallel languages are still evolving, with interesting developments in Co-Array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.

  10. Efficient algorithms for accurate hierarchical clustering of huge datasets: tackling the entire protein space.

    PubMed

    Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal

    2008-07-01

    UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and are applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities allows us to significantly improve on current protein family clusterings, which are unable to directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins, and for large or divergent families. A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request.
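
    For orientation, conventional in-memory UPGMA, the baseline whose memory requirement MC-UPGMA relaxes, is a single call on a full condensed dissimilarity matrix; the SciPy sketch below with random stand-in data shows that baseline and its O(n^2) memory cost, not the memory-constrained algorithm of the paper.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(4)
        points = rng.normal(size=(50, 16))          # stand-in for sequence feature vectors
        dissimilarities = pdist(points)             # condensed all-pairs matrix: O(n^2) memory

        tree = linkage(dissimilarities, method='average')      # UPGMA / average linking
        families = fcluster(tree, t=5.0, criterion='distance') # cut the tree into clusters
        # MC-UPGMA instead streams the edges in bounded-memory chunks while
        # guaranteeing the same merge tree as this in-memory computation.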

  11. The art of war: beyond memory-one strategies in population games.

    PubMed

    Lee, Christopher; Harper, Marc; Fryer, Dashiell

    2015-01-01

    We show that the history of play in a population game contains exploitable information that can be successfully used by sophisticated strategies to defeat memory-one opponents, including zero determinant strategies. The history allows a player to label opponents by their strategies, enabling a player to determine the population distribution and to act differentially based on the opponent's strategy in each pairwise interaction. For the Prisoner's Dilemma, these advantages lead to the natural formation of cooperative coalitions among similarly behaving players and eventually to unilateral defection against opposing player types. We show analytically and empirically that optimal play in population games depends strongly on the population distribution. For example, the optimal strategy for a minority player type against a resident TFT population is ALLC, while for a majority player type the optimal strategy versus TFT players is ALLD. Such behaviors are not accessible to memory-one strategies. Drawing inspiration from Sun Tzu's the Art of War, we implemented a non-memory-one strategy for population games based on techniques from machine learning and statistical inference that can exploit the history of play in this manner. Via simulation we find that this strategy is essentially uninvadable and can successfully invade (significantly more likely than a neutral mutant) essentially all known memory-one strategies for the Prisoner's Dilemma, including ALLC (always cooperate), ALLD (always defect), tit-for-tat (TFT), win-stay-lose-shift (WSLS), and zero determinant (ZD) strategies, including extortionate and generous strategies.
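
    For readers unfamiliar with the terminology, a memory-one strategy for the iterated Prisoner's Dilemma is just a table of cooperation probabilities conditioned on the previous round's joint outcome; the short sketch below simulates pairwise play between such strategies (the payoff matrix and match length are conventional illustrative choices, not values taken from the paper).

        import random

        # Memory-one strategy: P(cooperate | (my previous move, opponent's previous move)).
        TFT  = {('C', 'C'): 1.0, ('C', 'D'): 0.0, ('D', 'C'): 1.0, ('D', 'D'): 0.0}
        ALLD = {key: 0.0 for key in TFT}
        PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5), ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

        def play(p, q, rounds=200, seed=0):
            """Average per-round payoffs of two memory-one players over one match."""
            rng = random.Random(seed)
            prev = ('C', 'C')                       # conventional opening state
            score_p = score_q = 0
            for _ in range(rounds):
                a = 'C' if rng.random() < p[prev] else 'D'
                b = 'C' if rng.random() < q[(prev[1], prev[0])] else 'D'
                sa, sb = PAYOFF[(a, b)]
                score_p, score_q, prev = score_p + sa, score_q + sb, (a, b)
            return score_p / rounds, score_q / rounds

        print(play(TFT, ALLD))   # after one exploited round, both lock into mutual defection

    The longer-memory strategies studied in the paper additionally use the whole history of play to infer which strategy each opponent is using and how the population is composed.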

  12. Optimization of immunolabeling and clearing techniques for indelibly-labeled memory traces.

    PubMed

    Pavlova, Ina P; Shipley, Shannon C; Lanio, Marcos; Hen, René; Denny, Christine A

    2018-04-16

    Recent genetic tools have allowed researchers to visualize and manipulate memory traces (i.e. engrams) in small brain regions. However, the ultimate goal is to visualize memory traces across the entire brain in order to better understand how memories are stored in neural networks and how multiple memories may coexist. Intact tissue clearing and imaging is a new and rapidly growing area of focus that could accomplish this task. Here, we utilized the leading protocols for whole-brain clearing and applied them to the ArcCreER T2 mice, a murine line that allows for the indelible labeling of memory traces. We found that CLARITY and PACT greatly distorted the tissue, and iDISCO quenched enhanced yellow fluorescent protein (EYFP) fluorescence and hindered immunolabeling. Alternative clearing solutions, such as tert-Butanol, circumvented these harmful effects, but still did not permit whole-brain immunolabeling. CUBIC and CUBIC with Reagent 1A produced improved antibody penetration and preserved EYFP fluorescence, but also did not allow for whole-brain memory trace visualization. Modification of CUBIC with Reagent-1A resulted in EYFP fluorescence preservation and immunolabeling of the immediate early gene (IEG) Arc in deep brain areas; however, optimized memory trace labeling still required tissue slicing into mm-thick tissue sections. In summary, our data show that CUBIC with Reagent-1A* is the ideal method for reproducible clearing and immunolabeling for the visualization of memory traces in mm-thick tissue sections from ArcCreER T2 mice. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.

  13. Neural suppression of irrelevant information underlies optimal working memory performance.

    PubMed

    Zanto, Theodore P; Gazzaley, Adam

    2009-03-11

    Our ability to focus attention on task-relevant information and ignore distractions is reflected by differential enhancement and suppression of neural activity in sensory cortex (i.e., top-down modulation). Such selective, goal-directed modulation of activity may be intimately related to memory, such that the focus of attention biases the likelihood of successfully maintaining relevant information by limiting interference from irrelevant stimuli. Despite recent studies elucidating the mechanistic overlap between attention and memory, the relationship between top-down modulation of visual processing during working memory (WM) encoding, and subsequent recognition performance has not yet been established. Here, we provide neurophysiological evidence in healthy, young adults that top-down modulation of early visual processing (< 200 ms from stimulus onset) is intimately related to subsequent WM performance, such that the likelihood of successfully remembering relevant information is associated with limiting interference from irrelevant stimuli. The consequences of a failure to ignore distractors on recognition performance was replicated for two types of feature-based memory, motion direction and color. Moreover, attention to irrelevant stimuli was reflected neurally during the WM maintenance period as an increased memory load. These results suggest that neural enhancement of relevant information is not the primary determinant of high-level performance, but rather optimal WM performance is dependent on effectively filtering irrelevant information through neural suppression to prevent overloading a limited memory capacity.

  14. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High quality volume visualization through ray casting on graphics processing units (GPU) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy which are generated by downsampling eight neighboring nodes on a finer level. Due to limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU- and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory hereby acts as a secondary level cache for these sources from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture-atlas in GPU memory. This texture-atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree subdivision on its finest level and spatially organizes the bricked data. This approach allows us to render a bricked multi-resolution volume data set utilizing only a single rendering pass with no loss of compositing precision. In contrast, most state-of-the-art volume rendering systems handle the bricked data as individual 3D textures, which are rendered one at a time while the results are composited into a lower precision frame buffer. Furthermore, our method enables us to integrate advanced volume rendering techniques like empty-space skipping, adaptive sampling and preintegrated transfer functions in a very straightforward manner with virtually no extra cost. Our interactive volume ray casting implementation allows high quality visualizations of massive volume data sets of tens of Gigabytes in size on standard desktop workstations.
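
    The single-pass design hinges on two lookups per sample: a coarse index texture that maps a sample position to the brick covering it (plus its resolution level), and the 3D texture atlas that holds the brick data. The sketch below illustrates that address translation with NumPy arrays standing in for the textures; the brick size, index dimensions, and field layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of index-texture address translation for a bricked,
# multi-resolution volume (illustrative assumptions, not the paper's code).
import numpy as np

BRICK = 32                               # brick edge length in voxels (assumed)
INDEX_DIM = np.array([8, 8, 8])          # index texture at the finest subdivision

# one entry per finest-level cell: atlas offset (in voxels) and resolution level
index_volume = np.zeros((8, 8, 8, 4), dtype=np.int32)

def atlas_address(pos, volume_dim):
    """Translate a volume-space sample position into an atlas coordinate."""
    u = np.clip(np.asarray(pos, float) / np.asarray(volume_dim, float), 0.0, 1.0 - 1e-6)
    cell = (u * INDEX_DIM).astype(int)             # which index cell covers pos
    ax, ay, az, level = index_volume[tuple(cell)]  # brick offset + resolution level
    frac = u * INDEX_DIM - cell                    # position inside that cell
    # For level > 0 one brick spans 2**level index cells, so frac would be
    # rescaled accordingly; omitted here to keep the sketch short.
    return np.array([ax, ay, az], float) + frac * BRICK, level

coord, level = atlas_address((100.0, 40.0, 12.0), volume_dim=(256, 256, 256))
print("fetch atlas voxel", coord, "at resolution level", level)
```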

  15. Boosting Long-Term Memory via Wakeful Rest: Intentional Rehearsal Is Not Necessary, Consolidation Is Sufficient

    PubMed Central

    Dewar, Michaela; Alber, Jessica; Cowan, Nelson; Della Sala, Sergio

    2014-01-01

    People perform better on tests of delayed free recall if learning is followed immediately by a short wakeful rest than by a short period of sensory stimulation. Animal and human work suggests that wakeful resting provides optimal conditions for the consolidation of recently acquired memories. However, an alternative account cannot be ruled out, namely that wakeful resting provides optimal conditions for intentional rehearsal of recently acquired memories, thus driving superior memory. Here we utilised non-recallable words to examine whether wakeful rest boosts long-term memory, even when new memories could not be rehearsed intentionally during the wakeful rest delay. The probing of non-recallable words requires a recognition paradigm. Therefore, we first established, via Experiment 1, that the rest-induced boost in memory observed via free recall can be replicated in a recognition paradigm, using concrete nouns. In Experiment 2, participants heard 30 non-recallable non-words, presented as ‘foreign names in a bridge club abroad’ and then either rested wakefully or played a visual spot-the-difference game for 10 minutes. Retention was probed via recognition at two time points, 15 minutes and 7 days after presentation. As in Experiment 1, wakeful rest boosted recognition significantly, and this boost was maintained for at least 7 days. Our results indicate that the enhancement of memory via wakeful rest is not dependent upon intentional rehearsal of learned material during the rest period. We thus conclude that consolidation is sufficient for this rest-induced memory boost to emerge. We propose that wakeful resting allows for superior memory consolidation, resulting in stronger and/or more veridical representations of experienced events which can be detected via tests of free recall and recognition. PMID:25333957

  16. Boosting long-term memory via wakeful rest: intentional rehearsal is not necessary, consolidation is sufficient.

    PubMed

    Dewar, Michaela; Alber, Jessica; Cowan, Nelson; Della Sala, Sergio

    2014-01-01

    People perform better on tests of delayed free recall if learning is followed immediately by a short wakeful rest than by a short period of sensory stimulation. Animal and human work suggests that wakeful resting provides optimal conditions for the consolidation of recently acquired memories. However, an alternative account cannot be ruled out, namely that wakeful resting provides optimal conditions for intentional rehearsal of recently acquired memories, thus driving superior memory. Here we utilised non-recallable words to examine whether wakeful rest boosts long-term memory, even when new memories could not be rehearsed intentionally during the wakeful rest delay. The probing of non-recallable words requires a recognition paradigm. Therefore, we first established, via Experiment 1, that the rest-induced boost in memory observed via free recall can be replicated in a recognition paradigm, using concrete nouns. In Experiment 2, participants heard 30 non-recallable non-words, presented as 'foreign names in a bridge club abroad' and then either rested wakefully or played a visual spot-the-difference game for 10 minutes. Retention was probed via recognition at two time points, 15 minutes and 7 days after presentation. As in Experiment 1, wakeful rest boosted recognition significantly, and this boost was maintained for at least 7 days. Our results indicate that the enhancement of memory via wakeful rest is not dependent upon intentional rehearsal of learned material during the rest period. We thus conclude that consolidation is sufficient for this rest-induced memory boost to emerge. We propose that wakeful resting allows for superior memory consolidation, resulting in stronger and/or more veridical representations of experienced events which can be detected via tests of free recall and recognition.

  17. Block algebra in two-component BKP and D type Drinfeld-Sokolov hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Chuanzhong, E-mail: lichuanzhong@nbu.edu.cn; He, Jingsong, E-mail: hejingsong@nbu.edu.cn

    We construct generalized additional symmetries of a two-component BKP hierarchy defined by two pseudo-differential Lax operators. These additional symmetry flows form a Block type algebra with some modified (or additional) terms because of a B type reduction condition of this integrable hierarchy. Further, we show that the D type Drinfeld-Sokolov hierarchy, which is a reduction of the two-component BKP hierarchy, possesses a complete Block type additional symmetry algebra. That D type Drinfeld-Sokolov hierarchy has an algebraic structure similar to that of the bigraded Toda hierarchy, which is a differential-discrete integrable system.

  18. GrouseFlocks: steerable exploration of graph hierarchy space.

    PubMed

    Archambault, Daniel; Munzner, Tamara; Auber, David

    2008-01-01

    Several previous systems allow users to interactively explore a large input graph through cuts of a superimposed hierarchy. This hierarchy is often created using clustering algorithms or topological features present in the graph. However, many graphs have domain-specific attributes associated with the nodes and edges, which could be used to create many possible hierarchies providing unique views of the input graph. GrouseFlocks is a system for the exploration of this graph hierarchy space. By allowing users to see several different possible hierarchies on the same graph, the system helps users investigate graph hierarchy space instead of a single fixed hierarchy. GrouseFlocks provides a simple set of operations so that users can create and modify their graph hierarchies based on selections. These selections can be made manually or based on patterns in the attribute data provided with the graph. It provides feedback to the user within seconds, allowing interactive exploration of this space.

  19. The exposure hierarchy as a measure of progress and efficacy in the treatment of social anxiety disorder.

    PubMed

    Katerelos, Marina; Hawley, Lance L; Antony, Martin M; McCabe, Randi E

    2008-07-01

    This study explored the psychometric properties and utility of the exposure hierarchy as a measure of treatment outcome for social anxiety disorder (SAD). An exposure hierarchy was created for each of 103 individuals with a diagnosis of SAD who completed a course of cognitive behavioral group therapy. Exposure hierarchy ratings were collected on a weekly basis, and a series of self-report measures were collected before and after treatment. Results indicated that the exposure hierarchy demonstrated high test-retest reliability, as well as significant convergent validity, as participants' exposure hierarchy ratings correlated positively with scores on conceptually related measures. Hierarchy ratings were significantly associated with changes in SAD symptoms over time. However, exposure hierarchy ratings were also correlated with general measures of psychopathology, suggesting limited discriminant validity. The study highlights the clinical and scientific utility of the exposure hierarchy.

  20. Toda hierarchies and their applications

    NASA Astrophysics Data System (ADS)

    Takasaki, Kanehisa

    2018-05-01

    The 2D Toda hierarchy occupies a central position in the family of integrable hierarchies of the Toda type. The 1D Toda hierarchy and the Ablowitz–Ladik (aka relativistic Toda) hierarchy can be derived from the 2D Toda hierarchy as reductions. These integrable hierarchies have been applied to various problems of mathematics and mathematical physics since the 1990s. A recent example is a series of studies on models of statistical mechanics called the melting crystal model. This research has revealed that the aforementioned two reductions of the 2D Toda hierarchy underlie two different melting crystal models. Technical clues are a fermionic realization of the quantum torus algebra, special algebraic relations therein called shift symmetries, and a matrix factorization problem. The two melting crystal models thus exhibit remarkable similarity with the Hermitian and unitary matrix models for which the two reductions of the 2D Toda hierarchy play the role of fundamental integrable structures.

  1. Why and when hierarchy impacts team effectiveness: A meta-analytic integration.

    PubMed

    Greer, Lindred L; de Jong, Bart A; Schouten, Maartje E; Dannals, Jennifer E

    2018-06-01

    Hierarchy has the potential to both benefit and harm team effectiveness. In this article, we meta-analytically investigate different explanations for why and when hierarchy helps or hurts team effectiveness, drawing on results from 54 prior studies (N = 13,914 teams). Our findings show that, on net, hierarchy negatively impacts team effectiveness (performance: ρ = -.08; viability: ρ = -.11), and that this effect is mediated by increased conflict-enabling states. Additionally, we show that the negative relationship between hierarchy and team performance is exacerbated by aspects of the team structure (i.e., membership instability, skill differentiation) and the hierarchy itself (i.e., mutability), which make hierarchical teams prone to conflict. The predictions regarding the positive effect of hierarchy on team performance as mediated by coordination-enabling processes, and the moderating roles of several aspects of team tasks (i.e., interdependence, complexity) and the hierarchy (i.e., form) were not supported, with the exception that task ambiguity enhanced the positive effects of hierarchy. Given that our findings largely support dysfunctional views on hierarchy, future research is needed to understand when and why hierarchy may be more likely to live up to its purported functional benefits. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Memory-optimized shift operator alternating direction implicit finite difference time domain method for plasma

    NASA Astrophysics Data System (ADS)

    Song, Wanjun; Zhang, Hou

    2017-11-01

    Through introducing the alternating direction implicit (ADI) technique and the memory-optimized algorithm to the shift operator (SO) finite difference time domain (FDTD) method, the memory-optimized SO-ADI FDTD for nonmagnetized collisional plasma is proposed and the corresponding formulae of the proposed method for programming are deduced. To further improve computational efficiency, an iterative method rather than Gaussian elimination is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z transforms (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by the reflection coefficient test on a nonmagnetized collisional plasma sheet. The testing results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the radar cross section (RCS) of objects coated by plasma are calculated by the proposed method and the simulation results are analyzed.

  3. PCM-Based Durable Write Cache for Fast Disk I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick

    2012-01-01

    Flash based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage systems. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.
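
    The abstract does not spell out HALO's internal layout, so the following is only a hedged sketch of the general idea it names: random block writes are hashed into set-associative buckets on the fast persistent cache and later destaged to the hard disk in address order, which turns scattered writes into more sequential ones. The class name, bucket sizes, and eviction rule below are assumptions for illustration, not HALO itself.

```python
# Hedged sketch of a hash-based write cache: writes land in hashed buckets on
# fast persistent memory and are destaged to disk in sorted (sequential) order.
from collections import OrderedDict

class HashWriteCache:
    def __init__(self, num_sets=1024, ways=8):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [OrderedDict() for _ in range(num_sets)]  # block -> data

    def write(self, block, data, destage):
        s = self.sets[hash(block) % self.num_sets]
        s[block] = data
        s.move_to_end(block)                   # most recently written at the end
        if len(s) > self.ways:                 # set full: flush the oldest entries
            victims = sorted(list(s.items())[: len(s) - self.ways])
            for b, d in victims:
                destage(b, d)                  # address-ordered flush to the HDD
                del s[b]

disk = {}
cache = HashWriteCache()
for blk in [7, 1048583, 42, 7, 99]:            # scattered random writes
    cache.write(blk, b"x", destage=lambda b, d: disk.__setitem__(b, d))
```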

  4. Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach.

    PubMed

    Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H

    2012-12-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
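
    A minimal way to picture the visualization-driven loop is a block cache that records misses during rendering instead of stalling on them, then lets an out-of-core stage construct only the blocks that visible rays actually touched. The sketch below assumes simple names and a toy loader; it is not the paper's architecture.

```python
# Hedged sketch: cache misses during ray casting are queued, not serviced
# inline, so only blocks touched by visible rays are ever constructed.
class BlockCache:
    def __init__(self, loader, capacity=4):
        self.loader = loader
        self.capacity = capacity
        self.blocks = {}            # block id -> voxel data
        self.pending = set()        # cache misses queued for on-demand loading

    def sample(self, block_id):
        if block_id in self.blocks:
            return self.blocks[block_id]
        self.pending.add(block_id)  # propagate the miss backwards to the loader
        return None                 # renderer falls back to lower resolution / skips

    def service_misses(self):
        for block_id in sorted(self.pending):
            if len(self.blocks) < self.capacity:
                self.blocks[block_id] = self.loader(block_id)
        self.pending.clear()

cache = BlockCache(loader=lambda b: f"voxels of block {b}")
for b in [3, 7, 3]:
    cache.sample(b)                 # first frame: misses are only recorded
cache.service_misses()              # out-of-core stage builds just the touched blocks
print(cache.sample(3))
```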

  5. Algorithm for optimizing bipolar interconnection weights with applications in associative memories and multitarget classification.

    PubMed

    Chang, S; Wong, K W; Zhang, W; Zhang, Y

    1999-08-10

    An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.

  6. Algorithm for Optimizing Bipolar Interconnection Weights with Applications in Associative Memories and Multitarget Classification

    NASA Astrophysics Data System (ADS)

    Chang, Shengjiang; Wong, Kwok-Wo; Zhang, Wenwei; Zhang, Yanxin

    1999-08-01

    An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.

  7. Digital Equipment Corporation VAX/VMS Version 4.3

    DTIC Science & Technology

    1986-07-30

    The operating system performs process-oriented paging that allows execution of programs that may be larger than the physical memory allocated to them... to higher privileged modes. (For an explanation of how the four access modes provide memory access protection, see the report's "Memory Management" section.) ... to optimize program performance for real-time applications or interactive environments.

  8. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
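
    The abstract's key claim, memory scaling with the number of neurons rather than synapses, can be illustrated by queuing one pending event per (spiking neuron, delay bucket) and expanding it into individual synaptic deliveries only when it falls due; the outgoing synapses grouped by delay live in static connectivity tables. The sketch below is an illustration of that idea under assumed data structures, not the authors' algorithms.

```python
# Hedged sketch: pending events are stored per spiking neuron and delay bucket,
# and expanded into per-synapse deliveries only at delivery time.
import heapq
from collections import defaultdict

# static connectivity: source neuron -> {delay_steps: [(target, weight), ...]}
synapses = defaultdict(lambda: defaultdict(list))
synapses[0][2].append((1, 0.5))
synapses[0][5].append((2, -0.3))

event_heap = []          # (delivery_time, source, delay): one entry per spike/delay bucket

def on_spike(t, source):
    for delay in synapses[source]:
        heapq.heappush(event_heap, (t + delay, source, delay))

def advance(t_now, deliver):
    while event_heap and event_heap[0][0] <= t_now:
        t, src, delay = heapq.heappop(event_heap)
        for target, w in synapses[src][delay]:   # expand to synaptic deliveries now
            deliver(t, target, w)

on_spike(0, 0)
advance(10, deliver=lambda t, tgt, w: print(f"t={t} target={tgt} weight={w}"))
```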

  9. An ideal observer analysis of visual working memory.

    PubMed

    Sims, Chris R; Jacobs, Robert A; Knill, David C

    2012-10-01

    Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around rate-distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in 2 empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (e.g., how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis (one that allows variability in the number of stored memory representations but does not assume the presence of a fixed item limit) provides an excellent account of the empirical data and further offers a principled reinterpretation of existing models of VWM. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  10. Optimized collectives using a DMA on a parallel computer

    DOEpatents

    Chen, Dong [Croton On Hudson, NY; Gabor, Dozsa [Ardsley, NY; Giampapa, Mark E [Irvington, NY; Heidelberger,; Phillip, [Cortlandt Manor, NY

    2011-02-08

    Optimizing collective operations using direct memory access controller on a parallel computer, in one aspect, may comprise establishing a byte counter associated with a direct memory access controller for each submessage in a message. The byte counter includes at least a base address of memory and a byte count associated with a submessage. A byte counter associated with a submessage is monitored to determine whether at least a block of data of the submessage has been received. The block of data has a predetermined size, for example, a number of bytes. The block is processed when the block has been fully received, for example, when the byte count indicates all bytes of the block have been received. The monitoring and processing may continue for all blocks in all submessages in the message.
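
    In plainer terms, each submessage carries a counter recording where its data lands and how many bytes have arrived, and a consumer processes fixed-size blocks as soon as the counter shows them complete. The simulation below captures that control flow with illustrative field names; the patent describes hardware DMA counters, not Python objects.

```python
# Hedged sketch of byte-counter driven block processing for one submessage.
class ByteCounter:
    def __init__(self, base_addr, total_bytes):
        self.base_addr = base_addr
        self.expected = total_bytes
        self.received = 0              # incremented by the (simulated) DMA engine

    def dma_arrival(self, nbytes):
        self.received += nbytes

def process_submessage(counter, block_size, process_block):
    next_block = 0                     # offset of the next unprocessed block
    while next_block < counter.expected:
        arrived = counter.received
        if arrived >= next_block + block_size or arrived == counter.expected:
            end = min(next_block + block_size, counter.expected)
            process_block(counter.base_addr + next_block, end - next_block)
            next_block = end
        else:
            break   # nothing new yet; a real consumer would re-poll the counter later

c = ByteCounter(base_addr=0x1000, total_bytes=1024)
c.dma_arrival(512)
c.dma_arrival(512)
process_submessage(c, block_size=256,
                   process_block=lambda addr, n: print(hex(addr), n, "bytes"))
```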

  11. DESTINY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-03-10

    DESTINY is a comprehensive tool for modeling 3D and 2D cache designs using SRAM, embedded DRAM (eDRAM), spin transfer torque RAM (STT-RAM), resistive RAM (ReRAM), and phase change RAM (PCM). In its purpose, it is similar to CACTI, CACTI-3DD or NVSim. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g. latency, area or energy-delay product) for a given memory technology, choosing the suitable memory technology or fabrication method (i.e. 2D vs. 3D) for a given optimization target, etc. DESTINY has been validated against several cache prototypes. DESTINY is expected to boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers.

  12. A linear decomposition method for large optimization problems. Blueprint for development

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1982-01-01

    A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.

  13. Optimized Infrastructure for the Earth System Prediction Capability

    DTIC Science & Technology

    2013-09-30

    for referencing memory between its native coupling datatype (MCT Attribute Vectors) and ESMF Arrays. This will reduce the copies required and will... The introduced ability within CESM to share memory between ESMF and MCT datatypes makes using both tools together much easier. Using both is appealing...

  14. Development of a high capacity bubble domain memory element and related epitaxial garnet materials for application in spacecraft data recorders. Item 2: The optimization of material-device parameters for application in bubble domain memory elements for spacecraft data recorders

    NASA Technical Reports Server (NTRS)

    Besser, P. J.

    1976-01-01

    Bubble domain materials and devices are discussed. One of the materials development goals was a materials system suitable for operation of 16 micrometer period bubble domain devices at 150 kHz over the temperature range -10 C to +60 C. Several material compositions and hard bubble suppression techniques were characterized and the most promising candidates were evaluated in device structures. The technique of pulsed laser stroboscopic microscopy was used to characterize bubble dynamic properties and device performance at 150 kHz. Techniques for large area LPE film growth were developed as a separate task. Device studies included detector optimization, passive replicator design and test and on-chip bridge evaluation. As a technology demonstration an 8 chip memory cell was designed, tested and delivered. The memory elements used in the cell were 10 kilobit serial registers.

  15. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX)

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-01-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710
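
    The loop structure behind cache blocking is easy to state: pick a tile of the output that fits in cache, then run the whole inner workload over that tile before moving to the next one. The sketch below shows that loop ordering with a stand-in accumulation in place of the actual reconstruction arithmetic; the block size would be tuned per cache level as the article describes.

```python
# Cache-blocked accumulation sketch: each output tile stays cache-resident
# while every input array touches it (the arithmetic is only a stand-in for
# the per-projection update of a real reconstruction).
import numpy as np

def blocked_accumulate(out, inputs, block=64):
    ny, nx = out.shape
    for y0 in range(0, ny, block):
        for x0 in range(0, nx, block):
            tile = out[y0:y0 + block, x0:x0 + block]   # view into out, stays in cache
            for src in inputs:                         # reuse the resident tile
                tile += src[y0:y0 + block, x0:x0 + block]
    return out

out = np.zeros((256, 256), dtype=np.float32)
inputs = [np.full((256, 256), 0.01, dtype=np.float32) for _ in range(180)]
blocked_accumulate(out, inputs)
print(out[0, 0])   # roughly 180 * 0.01
```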

  16. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX).

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-06-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 - Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.
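
    The optimization machinery the authors describe (gradient descent driven by very noisy, few-history dose estimates, stabilized by gradient rescaling and momentum) can be caricatured in a few lines. The sketch below uses a random matrix as a stand-in for the beamlet dose influence, multiplicative noise as a stand-in for few-history statistics, and a quadratic objective; every numerical choice is an assumption for illustration, not the paper's method.

```python
# Hedged sketch: momentum SGD on fluence weights using noisy "few-history"
# dose estimates, with gradient renormalization (all settings illustrative).
import numpy as np
rng = np.random.default_rng(0)

n_voxels, n_beamlets = 50, 10
D_true = rng.random((n_voxels, n_beamlets))      # stand-in dose-influence matrix
target = D_true @ np.full(n_beamlets, 0.5)       # an achievable prescription

w = np.ones(n_beamlets)                          # fluence weights to optimize
velocity = np.zeros(n_beamlets)
momentum = 0.9

for it in range(2000):
    # few histories per iteration -> a very noisy view of the influence matrix
    D_noisy = D_true * (1.0 + 0.5 * rng.standard_normal(D_true.shape))
    grad = D_noisy.T @ (D_noisy @ w - target)    # gradient of 0.5*||Dw - target||^2
    grad /= np.linalg.norm(grad) + 1e-12         # rescale to unit norm
    step = 0.05 / (1.0 + 0.01 * it)              # decaying step tames the noise
    velocity = momentum * velocity - step * grad
    w = np.clip(w + velocity, 0.0, None)         # fluence stays non-negative

print("mean |dose - target| after optimization:", np.abs(D_true @ w - target).mean())
```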

  18. [P. Janet's concept of the notion of time].

    PubMed

    Fouks, L; Guibert, S; Montot, M

    1988-10-01

    The authors primarily show how P. Janet, influenced by Bergson, describes the evolution of the human mind, its complexities and progressive hierarchies, from the reflex arc to the deferred arc which allows the emergence of feelings. The notion of time appears late; it enters with the groups of feelings. It is interior and subjective, and its study is followed by an analysis of the concepts of presence, absence, strain, and memory, which for P. Janet is essentially prospective, its essential act being narration. The notion of lived time is then studied in the neurotics, whose horror of the present is put forward; in the depressed; in the melancholia of waiting, where time does not fly; in mania, through the "delighted ones" and the "restless ones"; and in the delirious.

  19. Calculating Reuse Distance from Source Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Sri Hari Krishna; Hovland, Paul

    The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for kernels studied show that the approach is accurate.
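
    Reuse distance is simple to state: for each memory access, count how many distinct addresses were touched since the previous access to the same address. A fully associative LRU cache of capacity C then misses exactly on accesses whose reuse distance is C or larger (plus cold misses), which is how a histogram of distances predicts miss rates. The sketch below measures the histogram from an address trace with the naive quadratic method; the cited work instead derives it statically from source code.

```python
# Naive (quadratic) reuse-distance histogram computed from an address trace.
from collections import Counter

def reuse_distance_histogram(trace):
    last_pos = {}                 # address -> index of its previous access
    hist = Counter()
    for i, addr in enumerate(trace):
        if addr in last_pos:
            distinct = len(set(trace[last_pos[addr] + 1 : i]))
            hist[distinct] += 1
        else:
            hist["inf"] += 1      # cold miss: no previous access
        last_pos[addr] = i
    return hist

print(reuse_distance_histogram(["a", "b", "c", "a", "b", "b"]))
# three cold misses, two accesses with distance 2, one with distance 0
```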

  20. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.

  1. Selective waste collection optimization in Romania and its impact to urban climate

    NASA Astrophysics Data System (ADS)

    Șercăianu, Mihai; Iacoboaea, Cristina; Petrescu, Florian; Aldea, Mihaela; Luca, Oana; Gaman, Florian; Parlow, Eberhard

    2016-08-01

    According to European Directives, transposed into national legislation, the Member States should organize separate collection systems at least for paper, metal, plastic, and glass by 2015. In Romania, since 2011 only 12% of collected municipal waste has been recovered, the rest being stored in landfills, although storage is considered the last option in the waste hierarchy. At the same time, only 4% of municipal waste was collected selectively. Surveys have shown that Romanian people do not have selective collection bins close to their residences. The article aims to analyze the current situation in Romania in the field of waste collection and management and to propose a layout for selective collection containers, using geographic information systems tools, for a case study in Romania. Route optimization is performed based on remote sensing technologies and network analyst protocols. By optimizing the selective collection system, greenhouse gas, particle, and dust emissions can be reduced.

  2. Stochastic Optimal Control via Bellman's Principle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Sun, Jian Q.

    2003-01-01

    This paper presents a method for finding optimal controls of nonlinear systems subject to random excitations. The method is capable to generate global control solutions when state and control constraints are present. The solution is global in the sense that controls for all initial conditions in a region of the state space are obtained. The approach is based on Bellman's Principle of optimality, the Gaussian closure and the Short-time Gaussian approximation. Examples include a system with a state-dependent diffusion term, a system in which the infinite hierarchy of moment equations cannot be analytically closed, and an impact system with a elastic boundary. The uncontrolled and controlled dynamics are studied by creating a Markov chain with a control dependent transition probability matrix via the Generalized Cell Mapping method. In this fashion, both the transient and stationary controlled responses are evaluated. The results show excellent control performances.
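
    Once the state space has been discretized into cells and each admissible control u has a transition probability matrix P[u] (as the Generalized Cell Mapping step produces), Bellman's principle reduces to a standard value-iteration sweep over the cells. The toy two-cell, two-control example below shows that backup; the problem data are invented purely for illustration.

```python
# Hedged sketch of the dynamic-programming core: value iteration over a
# discretized state space with control-dependent transition matrices.
import numpy as np

P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),    # transition matrix for control 0
     1: np.array([[0.5, 0.5], [0.6, 0.4]])}    # transition matrix for control 1
cost = {0: np.array([1.0, 2.0]),               # stage cost per cell for control 0
        1: np.array([1.5, 0.5])}               # stage cost per cell for control 1
gamma = 0.95                                   # discount factor

V = np.zeros(2)
for _ in range(500):                           # Bellman backup until convergence
    Q = np.stack([cost[u] + gamma * P[u] @ V for u in (0, 1)])
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = np.stack([cost[u] + gamma * P[u] @ V for u in (0, 1)]).argmin(axis=0)
print("optimal value per cell:", V, "policy:", policy)
```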

  3. FDTD Simulation of Novel Polarimetric and Directional Reflectance and Transmittance Measurements from Optical Nano- and Micro-Structured Materials

    DTIC Science & Technology

    2012-03-22

    structures and lead to better designs. Appendix A: Particle Swarm Optimization Algorithm. In order to validate the need for a new BSDF model... Hierarchy representation of a subset of ScatMech BSDF library model classes... polarimetric BRDF at λ=4.3μm of SPP structures with Λ=1.79μm (left), 2μm (middle) and 2.33μm (right). All components are normalized by dividing by s0.

  4. Semilinear (topological) spaces and applications

    NASA Technical Reports Server (NTRS)

    Prakash, P.; Sertel, M. R.

    1971-01-01

    Semivector spaces are defined and some of their algebraic aspects are developed including some structure theory. These spaces are then topologized to obtain semilinear topological spaces for which a hierarchy of local convexity axioms is identified. A number of fixed point and minmax theorems for spaces with various local convexity properties are established. The spaces of concern arise naturally as various hyperspaces of linear and semilinear (topological) spaces. It is indicated briefly how all this can be applied in socio-economic analysis and optimization.

  5. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    PubMed

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  6. Seeking structure in social organization: compensatory control and the psychological advantages of hierarchy.

    PubMed

    Friesen, Justin P; Kay, Aaron C; Eibach, Richard P; Galinsky, Adam D

    2014-04-01

    Hierarchies are a ubiquitous form of human social organization. We hypothesized that 1 reason for the prevalence of hierarchies is that they offer structure and therefore satisfy the core motivational needs for order and control relative to less structured forms of social organization. This hypothesis is rooted in compensatory control theory, which posits that (a) individuals have a basic need to perceive the world as orderly and structured, and (b) personal and external sources of control are capable of satisfying this need because both serve the comforting belief that the world operates in an orderly fashion. Our first 2 studies confirmed that hierarchies were perceived as more structured and orderly relative to egalitarian arrangements (Study 1) and that working in a hierarchical workplace promotes a feeling of self-efficacy (Study 2). We threatened participants' sense of personal control and measured perceptions of and preferences for hierarchy in 5 subsequent experiments. Participants who lacked control perceived more hierarchy occurring in ambiguous social situations (Study 3) and preferred hierarchy more strongly in workplace contexts (Studies 4-5). We also provide evidence that hierarchies are indeed appealing because of their structure: Preference for hierarchy was higher among individuals high in Personal Need for Structure and a control threat increased preference for hierarchy even among participants low in Personal Need for Structure (Study 5). Framing a hierarchy as unstructured reversed the effect of control threat on hierarchy (Study 6). Finally, hierarchy-enhancing jobs were more appealing after control threat, even when they were low in power and status (Study 7). (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  7. Study of parameter degeneracy and hierarchy sensitivity of NOνA in the presence of a sterile neutrino

    NASA Astrophysics Data System (ADS)

    Ghosh, Monojit; Gupta, Shivani; Matthews, Zachary M.; Sharma, Pankaj; Williams, Anthony G.

    2017-10-01

    The first hint of the neutrino mass hierarchy is believed to come from the long-baseline experiment NOνA. Recent results from NOνA show a mild preference towards the CP phase δ13 = -90° and normal hierarchy. Fortunately this is the favorable area of the parameter space which does not suffer from the hierarchy-δ13 degeneracy and thus NOνA can have good hierarchy sensitivity for this true combination of hierarchy and δ13. Apart from the hierarchy-δ13 degeneracy there is also the octant-δ13 degeneracy. But this does not affect the favorable parameter space of NOνA as this degeneracy can be resolved with a balanced neutrino and antineutrino run. However, if we consider the existence of a light sterile neutrino then there may be additional degeneracies which can spoil the hierarchy sensitivity of NOνA even in the favorable parameter space. In the present work we find that apart from the degeneracies mentioned above, there are additional hierarchy and octant degeneracies that appear with the new phase δ14 in the presence of a light sterile neutrino at the eV scale. In contrast to the hierarchy and octant degeneracies appearing with δ13, the parameter space for the hierarchy-δ14 degeneracy is different in neutrinos and antineutrinos, though the octant-δ14 degeneracy behaves similarly in neutrinos and antineutrinos. We study the effect of these degeneracies on the hierarchy sensitivity of NOνA for the true normal hierarchy.

  8. Shape memory polymers

    DOEpatents

    Wilson, Thomas S.; Bearinger, Jane P.

    2017-08-29

    New shape memory polymer compositions, methods for synthesizing new shape memory polymers, and apparatus comprising an actuator and a shape memory polymer wherein the shape memory polymer comprises at least a portion of the actuator. A shape memory polymer comprising a polymer composition which physically forms a network structure wherein the polymer composition has shape-memory behavior and can be formed into a permanent primary shape, re-formed into a stable secondary shape, and controllably actuated to recover the permanent primary shape. Polymers have optimal aliphatic network structures due to minimization of dangling chains by using monomers that are symmetrical and that have matching amine and hydroxyl groups providing polymers and polymer foams with clarity, tight (narrow temperature range) single transitions, and high shape recovery and recovery force that are especially useful for implanting in the human body.

  9. Shape memory polymers

    DOEpatents

    Wilson, Thomas S.; Bearinger, Jane P.

    2015-06-09

    New shape memory polymer compositions, methods for synthesizing new shape memory polymers, and apparatus comprising an actuator and a shape memory polymer wherein the shape memory polymer comprises at least a portion of the actuator. A shape memory polymer comprising a polymer composition which physically forms a network structure wherein the polymer composition has shape-memory behavior and can be formed into a permanent primary shape, re-formed into a stable secondary shape, and controllably actuated to recover the permanent primary shape. Polymers have optimal aliphatic network structures due to minimization of dangling chains by using monomers that are symmetrical and that have matching amine and hydroxyl groups providing polymers and polymer foams with clarity, tight (narrow temperature range) single transitions, and high shape recovery and recovery force that are especially useful for implanting in the human body.

  10. Optimal error functional for parameter identification in anisotropic finite strain elasto-plasticity

    NASA Astrophysics Data System (ADS)

    Shutov, A. V.; Kaygorodtseva, A. A.; Dranishnikov, N. S.

    2017-10-01

    A problem of parameter identification for a model of finite strain elasto-plasticity is discussed. The utilized phenomenological material model accounts for nonlinear isotropic and kinematic hardening; the model kinematics is described by a nested multiplicative split of the deformation gradient. A hierarchy of optimization problems is considered. First, following the standard procedure, the material parameters are identified through minimization of a certain least square error functional. Next, the focus is placed on finding optimal weighting coefficients which enter the error functional. Toward that end, a stochastic noise with systematic and non-systematic components is introduced to the available measurement results; a superordinate optimization problem seeks to minimize the sensitivity of the resulting material parameters to the introduced noise. The advantage of this approach is that no additional experiments are required; it also provides an insight into the robustness of the identification procedure. As an example, experimental data for the steel 42CrMo4 are considered and a set of weighting coefficients is found, which is optimal in a certain class.

  11. Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.

    PubMed

    Newberg, Lee A

    2008-08-15

    A backtrace through a dynamic programming algorithm's intermediate results in search of an optimal path, or to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g., cache), existing approaches store selected stages of the computation, and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data are available at Bioinformatics online.
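
    The underlying trade the article optimizes can be shown with a fixed-interval variant: keep only every k-th DP row during the forward pass, and recompute the rows between two checkpoints when the backtrace needs them. The sketch below uses a stand-in recurrence and a fixed k; the paper's contribution is choosing the checkpoint placement optimally, which this sketch does not attempt.

```python
# Hedged sketch of checkpointed backtrace: store every k-th DP row, and
# rebuild intermediate rows on demand by re-running the recurrence.
def forward_row(prev_row, step):
    # stand-in recurrence: a real aligner would fill one DP row here
    return [v + step for v in prev_row]

def checkpointed_rows(n_rows, width, k=4):
    row = [0] * width
    checkpoints = {0: row[:]}
    for i in range(1, n_rows):
        row = forward_row(row, i)
        if i % k == 0:
            checkpoints[i] = row[:]        # keep only every k-th row
    return checkpoints

def recover_row(checkpoints, target, k=4):
    """Recompute an arbitrary row from the nearest earlier checkpoint."""
    base = (target // k) * k
    row = checkpoints[base][:]
    for i in range(base + 1, target + 1):
        row = forward_row(row, i)
    return row

cps = checkpointed_rows(n_rows=17, width=8, k=4)
print(recover_row(cps, target=10))         # row 10 rebuilt from checkpoint 8
```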

  12. Design of a 0.13-μm CMOS cascade expandable ΣΔ modulator for multi-standard RF telecom systems

    NASA Astrophysics Data System (ADS)

    Morgado, Alonso; del Río, Rocío; de la Rosa, José M.

    2007-05-01

    This paper reports a 130-nm CMOS programmable cascade ΣΔ modulator for multi-standard wireless terminals, capable of operating on three standards: GSM, Bluetooth and UMTS. The modulator is reconfigured at both architecture- and circuit- level in order to adapt its performance to the different standards specifications with optimized power consumption. The design of the building blocks is based upon a top-down CAD methodology that combines simulation and statistical optimization at different levels of the system hierarchy. Transistor-level simulations show correct operation for all standards, featuring 13-bit, 11.3-bit and 9-bit effective resolution within 200-kHz, 1-MHz and 4-MHz bandwidth, respectively.

  13. Generating Data Flow Programs from Nonprocedural Specifications.

    DTIC Science & Technology

    1983-03-01

    With the I-structures, Gajski points out, it is difficult to know ahead of time the optimal memory allocation scheme to partition large arrays. ...Memory contention problems may occur for frequently accessed elements stored in the same memory module. Gajski observes that these are the same problems which...

  14. Improving Memory for Optimization and Learning in Dynamic Environments

    DTIC Science & Technology

    2011-07-01

    algorithm uses simple, incremental clustering to separate solutions into memory entries. The cluster centers are used as the models in the memory. This is... entire days of traffic with realistic traffic demands and turning ratios on a 32 intersection network modeled on downtown Pittsburgh, Pennsylvania...

  15. Implementation and evaluation of shared-memory communication and synchronization operations in MPICH2 using the Nemesis communication subsystem.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buntinas, D.; Mercier, G.; Gropp, W.

    2007-09-01

    This paper presents the implementation of MPICH2 over the Nemesis communication subsystem and the evaluation of its shared-memory performance. We describe design issues as well as some of the optimization techniques we employed. We conducted a performance evaluation over shared memory using microbenchmarks. The evaluation shows that MPICH2 Nemesis has very low communication overhead, making it suitable for smaller-grained applications.

  16. System, methods and apparatus for program optimization for multi-threaded processor architectures

    DOEpatents

    Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E

    2015-01-06

    Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus.

  17. Application of ant colony optimization in development of models for prediction of anti-HIV-1 activity of HEPT derivatives.

    PubMed

    Zare-Shahabadi, Vali; Abbasitabar, Fatemeh

    2010-09-01

    Quantitative structure-activity relationship models were derived for 107 analogs of 1-[(2-hydroxyethoxy) methyl]-6-(phenylthio)thymine, a potent inhibitor of the HIV-1 reverse transcriptase. The activities of these compounds were investigated by means of multiple linear regression (MLR) technique. An ant colony optimization algorithm, called Memorized_ACS, was applied for selecting relevant descriptors and detecting outliers. This algorithm uses an external memory based upon knowledge incorporation from previous iterations. At first, the memory is empty, and then it is filled by running several ACS algorithms. In this respect, after each ACS run, the elite ant is stored in the memory and the process is continued to fill the memory. Here, pheromone updating is performed by all elite ants collected in the memory; this results in improvements in both exploration and exploitation behaviors of the ACS algorithm. The memory is then made empty and is filled again by performing several ACS algorithms using updated pheromone trails. This process is repeated for several iterations. At the end, the memory contains several top solutions for the problem. Number of appearance of each descriptor in the external memory is a good criterion for its importance. Finally, prediction is performed by the elitist ant, and interpretation is carried out by considering the importance of each descriptor. The best MLR model has a training error of 0.47 log (1/EC(50)) units (R(2) = 0.90) and a prediction error of 0.76 log (1/EC(50)) units (R(2) = 0.88). Copyright 2010 Wiley Periodicals, Inc.
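
    Stripped of the QSAR specifics, the memory scheme works roughly as follows: several ant colony passes each deposit their elite solution in an external memory, the pheromone of every descriptor appearing in that memory is reinforced, the memory is cleared, and the cycle repeats; how often a descriptor lands in the memory then serves as its importance score. The toy subset-selection sketch below illustrates that loop with invented scoring and parameters, not the published algorithm.

```python
# Hedged sketch of ant colony subset selection with an external elite memory.
import random
random.seed(0)

N_DESC = 20
target = set(range(5))                       # pretend these are the truly relevant descriptors

def score(subset):                           # toy fitness: overlap with the "true" set
    return len(set(subset) & target) - 0.1 * len(subset)

def build_subset(pheromone, size=6):
    return random.choices(range(N_DESC), weights=pheromone, k=size)

pheromone = [1.0] * N_DESC
appearances = [0] * N_DESC
for round_ in range(5):                      # outer iterations
    memory = []                              # external memory of elite ants
    for _ in range(10):                      # several colony runs per round
        ants = [build_subset(pheromone) for _ in range(15)]
        memory.append(max(ants, key=score))  # store each run's elite ant
    for elite in memory:                     # pheromone update from all elites
        for d in set(elite):
            pheromone[d] += 0.5
            appearances[d] += 1
    pheromone = [0.9 * p for p in pheromone] # evaporation

ranking = sorted(range(N_DESC), key=lambda d: -appearances[d])
print("most frequently selected descriptors:", ranking[:5])
```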

  18. Differential working memory correlates for implicit sequence performance in young and older adults.

    PubMed

    Bo, Jin; Jennett, S; Seidler, R D

    2012-09-01

    Our recent work has revealed that visuospatial working memory (VSWM) relates to the rate of explicit motor sequence learning (Bo and Seidler in J Neurophysiol 101:3116-3125, 2009) and implicit sequence performance (Bo et al. in Exp Brain Res 214:73-81, 2011a) in young adults (YA). Although aging has a detrimental impact on many cognitive functions, including working memory, older adults (OA) still rely on their declining working memory resources in an effort to optimize explicit motor sequence learning. Here, we evaluated whether age-related differences in VSWM and/or verbal working memory (VWM) performance relates to implicit performance change in the serial reaction time (SRT) sequence task in OA. Participants performed two computerized working memory tasks adapted from change detection working memory assessments (Luck and Vogel in Nature 390:279-281, 1997), an implicit SRT task and several neuropsychological tests. We found that, although OA exhibited an overall reduction in both VSWM and VWM, both OA and YA showed similar performance in the implicit SRT task. Interestingly, while VSWM and VWM were significantly correlated with each other in YA, there was no correlation between these two working memory scores in OA. In YA, the rate of SRT performance change (exponential fit to the performance curve) was significantly correlated with both VSWM and VWM, while in contrast, OA's performance was only correlated with VWM, and not VSWM. These results demonstrate differential reliance on VSWM and VWM for SRT performance between YA and OA. OA may utilize VWM to maintain optimized performance of second-order conditional sequences.

  19. The Fragility of Individual-Based Explanations of Social Hierarchies: A Test Using Animal Pecking Orders

    PubMed Central

    2016-01-01

    The standard approach in accounting for hierarchical differentiation in biology and the social sciences considers a hierarchy as a static distribution of individuals possessing differing amounts of some valued commodity, assumes that the hierarchy is generated by micro-level processes involving individuals, and attempts to reverse engineer the processes that produced the hierarchy. However, sufficient experimental and analytical results are available to evaluate this standard approach in the case of animal dominance hierarchies (pecking orders). Our evaluation using evidence from hierarchy formation in small groups of both hens and cichlid fish reveals significant deficiencies in the three tenets of the standard approach in accounting for the organization of dominance hierarchies. In consequence, we suggest that a new approach is needed to explain the organization of pecking orders and, very possibly, by implication, for other kinds of social hierarchies. We develop an example of such an approach that considers dominance hierarchies to be dynamic networks, uses dynamic sequences of interaction (dynamic network motifs) to explain the organization of dominance hierarchies, and derives these dynamic sequences directly from observation of hierarchy formation. We test this dynamical explanation using computer simulation and find a good fit with actual dynamics of hierarchy formation in small groups of hens. We hypothesize that the same dynamic sequences are used in small groups of many other animal species forming pecking orders, and we discuss the data required to evaluate our hypothesis. Finally, we briefly consider how our dynamic approach may be generalized to other kinds of social hierarchies using the example of the distribution of empty gastropod (snail) shells occupied in populations of hermit crabs. PMID:27410230

  20. An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyakh, Dmitry I.

    An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). Furthermore, the tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).

  1. An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU

    NASA Astrophysics Data System (ADS)

    Lyakh, Dmitry I.

    2015-04-01

    An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). The tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
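
    The CUDA kernel below is a minimal sketch of the shared-memory idea described above, reduced to a plain 2D matrix transpose (it is not taken from TAL-SH): a tile is staged in shared memory so that both the global-memory read and the global-memory write remain coalesced, and the extra padding column avoids shared-memory bank conflicts.

      // Tiled transpose: stage a TILE x TILE block in shared memory so reads and writes
      // to global memory are both coalesced; the +1 column avoids bank conflicts.
      #include <cstdio>
      #include <cuda_runtime.h>

      constexpr int TILE = 32;

      __global__ void transposeTiled(const float* in, float* out, int rows, int cols) {
          __shared__ float tile[TILE][TILE + 1];
          int x = blockIdx.x * TILE + threadIdx.x;                 // column in the input
          int y = blockIdx.y * TILE + threadIdx.y;                 // row in the input
          if (x < cols && y < rows)
              tile[threadIdx.y][threadIdx.x] = in[y * cols + x];   // coalesced read
          __syncthreads();
          x = blockIdx.y * TILE + threadIdx.x;                     // column in the output
          y = blockIdx.x * TILE + threadIdx.y;                     // row in the output
          if (x < rows && y < cols)
              out[y * rows + x] = tile[threadIdx.x][threadIdx.y];  // coalesced write
      }

      int main() {
          const int rows = 1024, cols = 2048;
          float *in, *out;
          cudaMallocManaged(&in, rows * cols * sizeof(float));
          cudaMallocManaged(&out, rows * cols * sizeof(float));
          for (int i = 0; i < rows * cols; ++i) in[i] = float(i);
          dim3 block(TILE, TILE), grid((cols + TILE - 1) / TILE, (rows + TILE - 1) / TILE);
          transposeTiled<<<grid, block>>>(in, out, rows, cols);
          cudaDeviceSynchronize();
          std::printf("out[0][1] = %f (expected %f)\n", out[1], in[cols]);  // element (1,0) of the input
          cudaFree(in);
          cudaFree(out);
      }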

  2. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Grout, Ray W

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via large number of lightweight cores at relatively lower clock speeds compared to traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine grain parallelism and through extensive use of vendor supported libraries (Cilk Plus and Math Kernel Libraries). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a factor of ~ 3 speed-up using Intel 2013 compiler and ~ 1.5 using Intel 2017 compiler for large chemical mechanisms compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general purpose computational fluid dynamics codes.
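
    A tiny host-side sketch of the memory-layout point (not the authors' solver; the "rate" update is a placeholder, not real chemistry): storing species mass fractions as structure-of-arrays gives the innermost loop over grid cells unit-stride access, which is exactly what a compiler needs to vectorize it.

      // Structure-of-arrays layout: Y[s] holds one species over all cells contiguously,
      // so the inner loop is unit-stride and vectorizes without gathers or scatters.
      #include <cstdio>
      #include <vector>

      int main() {
          const int nCells = 1 << 20, nSpecies = 16;
          std::vector<std::vector<double>> Y(nSpecies, std::vector<double>(nCells, 1.0 / nSpecies));
          std::vector<double> rate(nCells, 0.0);

          for (int s = 0; s < nSpecies; ++s) {
              const double k = 0.1 * (s + 1);          // placeholder rate constant
              const double* y = Y[s].data();
              double* r = rate.data();
              #pragma omp simd                         // hint the compiler to vectorize this loop
              for (int c = 0; c < nCells; ++c)
                  r[c] += k * y[c];                    // contiguous, streaming access
          }
          std::printf("rate[0] = %g\n", rate[0]);
      }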

  3. Categorical Biases in Spatial Memory: The Role of Certainty

    ERIC Educational Resources Information Center

    Holden, Mark P.; Newcombe, Nora S.; Shipley, Thomas F.

    2015-01-01

    Memories for spatial locations often show systematic errors toward the central value of the surrounding region. The Category Adjustment (CA) model suggests that this bias is due to a Bayesian combination of categorical and metric information, which offers an optimal solution under conditions of uncertainty (Huttenlocher, Hedges, & Duncan,…
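
    In the usual Gaussian formulation of such a Bayesian combination (our notation; the cited model may parameterize it differently), the reported location is a precision-weighted average of the fine-grained memory and the category prototype:

      \hat{x} = \lambda\,\mu_{c} + (1-\lambda)\,x_{\mathrm{fine}},
      \qquad
      \lambda = \frac{\sigma_{\mathrm{fine}}^{2}}{\sigma_{\mathrm{fine}}^{2} + \sigma_{c}^{2}},

    so the noisier the fine-grained trace, the larger the weight on the category center and the stronger the central bias described above.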

  4. A neural model of the temporal dynamics of figure-ground segregation in motion perception.

    PubMed

    Raudies, Florian; Neumann, Heiko

    2010-03-01

    How does the visual system manage to segment a visual scene into surfaces and objects and manage to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by the processing at different levels of the visual cortical hierarchy. According to this, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation pattern of neurons as low as in area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in a unifying framework. Based on known principles of functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathway are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy. We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from the consolidated shape representations from coherent motion patterns and the attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses as well as their response variations that were caused by modulating feedback signals. Copyright 2009 Elsevier Ltd. All rights reserved.

  5. Decoding the Traumatic Memory among Women with PTSD: Implications for Neurocircuitry Models of PTSD and Real-Time fMRI Neurofeedback

    PubMed Central

    Cisler, Josh M.; Bush, Keith; James, G. Andrew; Smitherman, Sonet; Kilts, Clinton D.

    2015-01-01

    Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use a MVPA approach to test 1) whether trauma memory vs neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain core neural processing mechanisms mediating PTSD. PMID:26241958

  6. Decoding the Traumatic Memory among Women with PTSD: Implications for Neurocircuitry Models of PTSD and Real-Time fMRI Neurofeedback.

    PubMed

    Cisler, Josh M; Bush, Keith; James, G Andrew; Smitherman, Sonet; Kilts, Clinton D

    2015-01-01

    Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use a MVPA approach to test 1) whether trauma memory vs neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain core neural processing mechanisms mediating PTSD.

  7. Resilient 3D hierarchical architected metamaterials

    PubMed Central

    Meza, Lucas R.; Zelhofer, Alex J.; Clarke, Nigel; Mateos, Arturo J.; Kochmann, Dennis M.; Greer, Julia R.

    2015-01-01

    Hierarchically designed structures with architectural features that span across multiple length scales are found in numerous hard biomaterials, like bone, wood, and glass sponge skeletons, as well as manmade structures, like the Eiffel Tower. It has been hypothesized that their mechanical robustness and damage tolerance stem from sophisticated ordering within the constituents, but the specific role of hierarchy remains to be fully described and understood. We apply the principles of hierarchical design to create structural metamaterials from three material systems: (i) polymer, (ii) hollow ceramic, and (iii) ceramic–polymer composites that are patterned into self-similar unit cells in a fractal-like geometry. In situ nanomechanical experiments revealed (i) a nearly theoretical scaling of structural strength and stiffness with relative density, which outperforms existing nonhierarchical nanolattices; (ii) recoverability, with hollow alumina samples recovering up to 98% of their original height after compression to ≥50% strain; (iii) suppression of brittle failure and structural instabilities in hollow ceramic hierarchical nanolattices; and (iv) a range of deformation mechanisms that can be tuned by changing the slenderness ratios of the beams. Additional levels of hierarchy beyond a second order did not increase the strength or stiffness, which suggests the existence of an optimal degree of hierarchy to amplify resilience. We developed a computational model that captures local stress distributions within the nanolattices under compression and explains some of the underlying deformation mechanisms as well as validates the measured effective stiffness to be interpreted as a metamaterial property. PMID:26330605

  8. Resilient 3D hierarchical architected metamaterials.

    PubMed

    Meza, Lucas R; Zelhofer, Alex J; Clarke, Nigel; Mateos, Arturo J; Kochmann, Dennis M; Greer, Julia R

    2015-09-15

    Hierarchically designed structures with architectural features that span across multiple length scales are found in numerous hard biomaterials, like bone, wood, and glass sponge skeletons, as well as manmade structures, like the Eiffel Tower. It has been hypothesized that their mechanical robustness and damage tolerance stem from sophisticated ordering within the constituents, but the specific role of hierarchy remains to be fully described and understood. We apply the principles of hierarchical design to create structural metamaterials from three material systems: (i) polymer, (ii) hollow ceramic, and (iii) ceramic-polymer composites that are patterned into self-similar unit cells in a fractal-like geometry. In situ nanomechanical experiments revealed (i) a nearly theoretical scaling of structural strength and stiffness with relative density, which outperforms existing nonhierarchical nanolattices; (ii) recoverability, with hollow alumina samples recovering up to 98% of their original height after compression to ≥ 50% strain; (iii) suppression of brittle failure and structural instabilities in hollow ceramic hierarchical nanolattices; and (iv) a range of deformation mechanisms that can be tuned by changing the slenderness ratios of the beams. Additional levels of hierarchy beyond a second order did not increase the strength or stiffness, which suggests the existence of an optimal degree of hierarchy to amplify resilience. We developed a computational model that captures local stress distributions within the nanolattices under compression and explains some of the underlying deformation mechanisms as well as validates the measured effective stiffness to be interpreted as a metamaterial property.

  9. Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.

    PubMed

    Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian

    2018-02-23

    Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from a massive set of GO terms as defined by GO is a difficult challenge. To combat this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash firstly measures the taxonomic similarity between GO terms. It then uses a hierarchy preserving hashing technique to keep the hierarchical order between GO terms, and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and it is robust to the number of hash functions. In addition, we also take HPHash as a plugin for BLAST based gene function prediction. From the experimental results, HPHash again significantly improves the prediction performance. The codes of HPHash are available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. A consistent hierarchy of generalized kinetic equation approximations to the master equation applied to surface catalysis.

    PubMed

    Herschlag, Gregory J; Mitran, Sorin; Lin, Guang

    2015-06-21

    We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one- and two-dimensional numerical examples. Lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo methods (KMC) for one-dimensional systems, while predicting similar system dynamics and steady states as KMC methods. We then test the hierarchy on a two-dimensional model for the oxidation of CO on RuO2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.

  11. Enhancing early consolidation of human episodic memory by theta EEG neurofeedback.

    PubMed

    Rozengurt, Roman; Shtoots, Limor; Sheriff, Aviv; Sadka, Ofir; Levy, Daniel A

    2017-11-01

    Consolidation of newly formed memories is readily disrupted, but can it be enhanced? Given the prominent role of hippocampal theta oscillations in memory formation and retrieval, we hypothesized that upregulating theta power during early stages of consolidation might benefit memory stability and persistence. We used EEG neurofeedback to enable participants to selectively increase theta power in their EEG spectra following episodic memory encoding, while other participants engaged in low beta-focused neurofeedback or passively viewed a neutral nature movie. Free recall assessments immediately following the interventions, 24h later and 7d later all indicated benefit to memory of theta neurofeedback, relative to low beta neurofeedback or passive movie-viewing control conditions. The degree of benefit to memory was correlated with the extent of theta power modulation, but not with other spectral changes. Theta enhancement may provide optimal conditions for stabilization of new hippocampus-dependent memories. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Retribution as hierarchy regulation: Hierarchy preferences moderate the effect of offender socioeconomic status on support for retribution.

    PubMed

    Redford, Liz; Ratliff, Kate A

    2018-01-01

    People punish others for various reasons, including deterring future crime, incapacitating the offender, and retribution, or payback. The current research focuses on retribution, testing whether support for retribution is motivated by the desire to maintain social hierarchies. If so, then the retributive tendencies of hierarchy enhancers or hierarchy attenuators should depend on whether offenders are relatively lower or higher in status, respectively. Three studies showed that hierarchy attenuators were more retributive against high-status offenders than for low-status offenders, that hierarchy enhancers showed a stronger orientation towards retributive justice, and that relationship was stronger for low-status, rather than high-status, criminal offenders. These findings clarify the purpose and function of retributive punishment. They also reveal how hierarchy-regulating motives underlie retribution, motives which, if allowed to influence judgements, may contribute to biased or ineffective justice systems. © 2017 The British Psychological Society.

  13. ‘If you are good, I get better’: the role of social hierarchy in perceptual decision-making

    PubMed Central

    Pannunzi, Mario; Ayneto, Alba; Deco, Gustavo; Sebastián-Gallés, Nuria

    2014-01-01

    So far, it was unclear if social hierarchy could influence sensory or perceptual cognitive processes. We evaluated the effects of social hierarchy on these processes using a basic visual perceptual decision task. We constructed a social hierarchy where participants performed the perceptual task separately with two covertly simulated players (superior, inferior). Participants were faster (better) when performing the discrimination task with the superior player. We studied the time course when social hierarchy was processed using event-related potentials and observed hierarchical effects even in early stages of sensory-perceptual processing, suggesting early top–down modulation by social hierarchy. Moreover, in a parallel analysis, we fitted a drift-diffusion model (DDM) to the results to evaluate the decision making process of this perceptual task in the context of a social hierarchy. Consistently, the DDM pointed to nondecision time (probably perceptual encoding) as the principal period influenced by social hierarchy. PMID:23946003

  14. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    PubMed

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.
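
    A minimal C++ sketch of the general hybrid idea (adapting the representation to local sparsity; it does not attempt JiTTree's just-in-time compilation): each fixed-size brick of the volume stores its voxels either densely or as a sorted index/value list, depending on how many of them are non-zero.

      // Hypothetical brick-level hybrid storage: dense array vs. sparse (index, value) list.
      #include <algorithm>
      #include <cstdint>
      #include <iostream>
      #include <utility>
      #include <vector>

      constexpr int BRICK = 8;                                     // 8 x 8 x 8 voxels per brick
      constexpr int BRICK_VOXELS = BRICK * BRICK * BRICK;

      struct Brick {
          bool dense = false;
          std::vector<float> denseData;                            // BRICK_VOXELS entries if dense
          std::vector<std::pair<uint16_t, float>> sparseData;      // sorted (local index, value) pairs

          static Brick build(const std::vector<float>& voxels, float occupancyThreshold = 0.25f) {
              Brick b;
              long nonZero = std::count_if(voxels.begin(), voxels.end(),
                                           [](float v) { return v != 0.0f; });
              if (nonZero > occupancyThreshold * BRICK_VOXELS) {   // mostly full: store densely
                  b.dense = true;
                  b.denseData = voxels;
              } else {                                             // mostly empty: store sparsely
                  for (int i = 0; i < BRICK_VOXELS; ++i)
                      if (voxels[i] != 0.0f) b.sparseData.emplace_back(uint16_t(i), voxels[i]);
              }
              return b;
          }

          float at(int localIndex) const {
              if (dense) return denseData[localIndex];
              auto it = std::lower_bound(sparseData.begin(), sparseData.end(), localIndex,
                                         [](const std::pair<uint16_t, float>& p, int idx) {
                                             return int(p.first) < idx;
                                         });
              return (it != sparseData.end() && it->first == localIndex) ? it->second : 0.0f;
          }
      };

      int main() {
          std::vector<float> voxels(BRICK_VOXELS, 0.0f);
          voxels[42] = 3.5f;                                       // a nearly empty brick
          Brick b = Brick::build(voxels);
          std::cout << (b.dense ? "dense" : "sparse") << " brick, voxel 42 = " << b.at(42) << "\n";
      }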

  15. The neural optimal control hierarchy for motor control

    NASA Astrophysics Data System (ADS)

    DeWolf, T.; Eliasmith, C.

    2011-10-01

    Our empirical, neuroscientific understanding of biological motor systems has been rapidly growing in recent years. However, this understanding has not been systematically mapped to a quantitative characterization of motor control based in control theory. Here, we attempt to bridge this gap by describing the neural optimal control hierarchy (NOCH), which can serve as a foundation for biologically plausible models of neural motor control. The NOCH has been constructed by taking recent control theoretic models of motor control, analyzing the required processes, generating neurally plausible equivalent calculations and mapping them on to the neural structures that have been empirically identified to form the anatomical basis of motor control. We demonstrate the utility of the NOCH by constructing a simple model based on the identified principles and testing it in two ways. First, we perturb specific anatomical elements of the model and compare the resulting motor behavior with clinical data in which the corresponding area of the brain has been damaged. We show that damaging the assigned functions of the basal ganglia and cerebellum can cause the movement deficiencies seen in patients with Huntington's disease and cerebellar lesions. Second, we demonstrate that single spiking neuron data from our model's motor cortical areas explain major features of single-cell responses recorded from the same primate areas. We suggest that together these results show how NOCH-based models can be used to unify a broad range of data relevant to biological motor control in a quantitative, control theoretic framework.

  16. Accelerate quasi Monte Carlo method for solving systems of linear algebraic equations through shared memory

    NASA Astrophysics Data System (ADS)

    Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola

    2017-04-01

    In this paper we study the Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Previous research demonstrated that the GPU can effectively speed up this computation. Our purpose is to optimize the Monte Carlo simulation specifically for the GPU memory architecture. Random numbers are organized and stored in shared memory, which accelerates the parallel algorithm. Bank conflicts can be avoided by our Collaborative Thread Arrays (CTA) scheme. The results of experiments show that the shared-memory-based strategy can speed up the computations by more than 3X in the best case.
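
    The kernel below is only a toy illustration of the staging pattern referred to above (pre-generated random numbers placed in a per-block shared-memory table that every thread of the block then reuses); it estimates pi rather than solving an SLAE, and the paper's CTA bank-conflict scheme is not reproduced.

      // Toy CUDA sketch: stage a per-block table of random numbers in shared memory,
      // then let every thread reuse the whole table from fast on-chip storage.
      #include <cstdio>
      #include <random>
      #include <vector>
      #include <cuda_runtime.h>

      __global__ void piFromSharedTable(const float* rnd, int n,
                                        unsigned long long* hits, unsigned long long* tries) {
          extern __shared__ float table[];                        // n floats per block
          for (int i = threadIdx.x; i < n; i += blockDim.x)
              table[i] = rnd[blockIdx.x * n + i];                 // coalesced load from global memory
          __syncthreads();
          unsigned long long h = 0, t = 0;
          for (int i = 0; i < n; ++i) {                           // every thread rereads the table
              float x = table[i];
              float y = table[(i + threadIdx.x + 1) % n];         // thread-specific pairing shift
              if (x * x + y * y < 1.0f) ++h;
              ++t;
          }
          atomicAdd(hits, h);
          atomicAdd(tries, t);
      }

      int main() {
          const int blocks = 64, threads = 128, n = 1024;
          std::vector<float> host(size_t(blocks) * n);
          std::mt19937 gen(7);
          std::uniform_real_distribution<float> uni(0.0f, 1.0f);
          for (float& v : host) v = uni(gen);

          float* dRnd;
          unsigned long long *dHits, *dTries;
          cudaMalloc(&dRnd, host.size() * sizeof(float));
          cudaMalloc(&dHits, sizeof(unsigned long long));
          cudaMalloc(&dTries, sizeof(unsigned long long));
          cudaMemcpy(dRnd, host.data(), host.size() * sizeof(float), cudaMemcpyHostToDevice);
          cudaMemset(dHits, 0, sizeof(unsigned long long));
          cudaMemset(dTries, 0, sizeof(unsigned long long));

          piFromSharedTable<<<blocks, threads, n * sizeof(float)>>>(dRnd, n, dHits, dTries);
          unsigned long long hits = 0, tries = 0;
          cudaMemcpy(&hits, dHits, sizeof(hits), cudaMemcpyDeviceToHost);
          cudaMemcpy(&tries, dTries, sizeof(tries), cudaMemcpyDeviceToHost);
          std::printf("pi is roughly %f\n", 4.0 * double(hits) / double(tries));
          cudaFree(dRnd);
          cudaFree(dHits);
          cudaFree(dTries);
      }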

  17. A Fully GPU-Based Ray-Driven Backprojector via a Ray-Culling Scheme with Voxel-Level Parallelization for Cone-Beam CT Reconstruction.

    PubMed

    Park, Hyeong-Gyu; Shin, Yeong-Gil; Lee, Ho

    2015-12-01

    A ray-driven backprojector is based on ray-tracing, which computes the length of the intersection between the ray paths and each voxel to be reconstructed. To reduce the computational burden caused by these exhaustive intersection tests, we propose a fully graphics processing unit (GPU)-based ray-driven backprojector in conjunction with a ray-culling scheme that enables straightforward parallelization without compromising the high computing performance of a GPU. The purpose of the ray-culling scheme is to reduce the number of ray-voxel intersection tests by excluding rays irrelevant to a specific voxel computation. This rejection step is based on an axis-aligned bounding box (AABB) enclosing a region of voxel projection, where eight vertices of each voxel are projected onto the detector plane. The range of the rectangular-shaped AABB is determined by min/max operations on the coordinates in the region. Using the indices of pixels inside the AABB, the rays passing through the voxel can be identified and the voxel is weighted as the length of intersection between the voxel and the ray. This procedure makes it possible to reflect voxel-level parallelization, allowing an independent calculation at each voxel, which is feasible for a GPU implementation. To eliminate redundant calculations during ray-culling, a shared-memory optimization is applied to exploit the GPU memory hierarchy. In experimental results using real measurement data with phantoms, the proposed GPU-based ray-culling scheme reconstructed a volume of resolution 280×280×176 in 77 seconds from 680 projections of resolution 1024×768, which is 26 times and 7.5 times faster than standard CPU-based and GPU-based ray-driven backprojectors, respectively. Qualitative and quantitative analyses showed that the ray-driven backprojector provides high-quality reconstruction images when compared with those generated by the Feldkamp-Davis-Kress algorithm using a pixel-driven backprojector, with an average of 2.5 times higher contrast-to-noise ratio, 1.04 times higher universal quality index, and 1.39 times higher normalized mutual information. © The Author(s) 2014.
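
    The kernel below sketches the voxel-level parallel ray-culling step in a simplified form (a hypothetical point-source/flat-detector geometry and a unit weight are assumed, so it is not the paper's exact cone-beam code): one thread per voxel projects the eight voxel corners onto the detector, takes min/max to obtain the AABB, and only accumulates the detector pixels that fall inside that box.

      // Simplified voxel-parallel backprojection with AABB-based ray culling (kernel only).
      // Geometry (source at the origin, detector plane at z = sdd, detector centered on the
      // axis) and the constant per-pixel weight are illustrative assumptions.
      #include <cfloat>
      #include <cuda_runtime.h>

      struct Geometry {
          float sdd;                                  // source-to-detector distance
          float pixelPitch;                           // detector pixel size
          int   nu, nv;                               // detector size in pixels
          float voxelSize;                            // cubic voxel edge length
          float originX, originY, originZ;            // world-space corner of the volume
          int   nx, ny, nz;                           // volume size in voxels
      };

      __device__ void projectToDetector(const Geometry& g, float x, float y, float z,
                                        float& u, float& v) {
          u = x * g.sdd / z;                          // perspective projection onto the detector
          v = y * g.sdd / z;
      }

      __global__ void backprojectAABB(const float* proj, float* volume, Geometry g) {
          int idx = blockIdx.x * blockDim.x + threadIdx.x;        // one thread per voxel
          if (idx >= g.nx * g.ny * g.nz) return;
          int ix = idx % g.nx, iy = (idx / g.nx) % g.ny, iz = idx / (g.nx * g.ny);

          // Project the eight voxel corners and take their bounding box on the detector.
          float uMin = FLT_MAX, uMax = -FLT_MAX, vMin = FLT_MAX, vMax = -FLT_MAX;
          for (int c = 0; c < 8; ++c) {
              float x = g.originX + (ix + (c & 1)) * g.voxelSize;
              float y = g.originY + (iy + ((c >> 1) & 1)) * g.voxelSize;
              float z = g.originZ + (iz + ((c >> 2) & 1)) * g.voxelSize;
              float u, v;
              projectToDetector(g, x, y, z, u, v);
              uMin = fminf(uMin, u); uMax = fmaxf(uMax, u);
              vMin = fminf(vMin, v); vMax = fmaxf(vMax, v);
          }

          // Rays whose pixels lie outside the AABB cannot cross this voxel and are culled.
          int iu0 = max(0, int(floorf(uMin / g.pixelPitch)) + g.nu / 2);
          int iu1 = min(g.nu - 1, int(ceilf(uMax / g.pixelPitch)) + g.nu / 2);
          int iv0 = max(0, int(floorf(vMin / g.pixelPitch)) + g.nv / 2);
          int iv1 = min(g.nv - 1, int(ceilf(vMax / g.pixelPitch)) + g.nv / 2);

          float sum = 0.0f;
          for (int iv = iv0; iv <= iv1; ++iv)
              for (int iu = iu0; iu <= iu1; ++iu)
                  sum += proj[iv * g.nu + iu];        // unit weight stands in for the true
                                                      // ray/voxel intersection length
          volume[idx] += sum;
      }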

  18. Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain.

    PubMed

    Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh

    2017-04-01

    The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somehow independent from melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Real Time Monitoring and Prediction of the Monsoon Intraseasonal Oscillations: An index based on Nonlinear Laplacian Spectral Analysis Technique

    NASA Astrophysics Data System (ADS)

    Cherumadanakadan Thelliyil, S.; Ravindran, A. M.; Giannakis, D.; Majda, A.

    2016-12-01

    An improved index for real time monitoring and forecast verification of monsoon intraseasonal oscillations (MISO) is introduced using the recently developed Nonlinear Laplacian Spectral Analysis (NLSA) algorithm. Previous studies have demonstrated the proficiency of NLSA in capturing low frequency variability and intermittency of a time series. Using NLSA, a hierarchy of Laplace-Beltrami (LB) eigenfunctions is extracted from the unfiltered daily GPCP rainfall data over the south Asian monsoon region. Two modes representing the full life cycle of the complex northeastward propagating boreal summer MISO are identified from the hierarchy of Laplace-Beltrami eigenfunctions. These two MISO modes have a number of advantages over the conventionally used Extended Empirical Orthogonal Function (EEOF) MISO modes including higher memory and better predictability, higher fractional variance over the western Pacific, Western Ghats and adjoining Arabian Sea regions and more realistic representation of regional heat sources associated with the MISO. The skill of NLSA based MISO indices in real time prediction of MISO is demonstrated using hindcasts of CFSv2 extended range prediction runs. It is shown that these indices yield a higher prediction skill than the other conventional indices, supporting the use of NLSA in real time prediction of MISO. Real time monitoring and prediction of MISO finds its application in the agriculture, construction and hydro-electric power sectors and is hence an important component of monsoon prediction.

  20. Biologically-inspired robust and adaptive multi-sensor fusion and active control

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    In this paper, we describe a method and system for robust and efficient goal-oriented active control of a machine (e.g., robot) based on processing, hierarchical spatial understanding, representation and memory of multimodal sensory inputs. This work assumes that a high-level plan or goal is known a priori or is provided by an operator interface, which translates into an overall perceptual processing strategy for the machine. Its analogy to the human brain is the download of plans and decisions from the pre-frontal cortex into various perceptual working memories as a perceptual plan that then guides the sensory data collection and processing. For example, a goal might be to look for specific colored objects in a scene while also looking for specific sound sources. This paper combines three key ideas and methods into a single closed-loop active control system. (1) Use high-level plan or goal to determine and prioritize spatial locations or waypoints (targets) in multimodal sensory space; (2) collect/store information about these spatial locations at the appropriate hierarchy and representation in a spatial working memory. This includes invariant learning of these spatial representations and how to convert between them; and (3) execute actions based on ordered retrieval of these spatial locations from hierarchical spatial working memory and using the "right" level of representation that can efficiently translate into motor actions. In its most specific form, the active control is described for a vision system (such as a pan-tilt-zoom camera system mounted on a robotic head and neck unit) which finds and then fixates on high saliency visual objects. We also describe the approach where the goal is to turn towards and sequentially foveate on salient multimodal cues that include both visual and auditory inputs.

  1. The Gremlin Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-26

    The Gremlin software package is a performance analysis approach targeted to support the Co-Design process for future systems. It consists of a series of modules that can be used to alter a machine's behavior with the goal of emulating future machine properties. The modules can be divided into several classes; the most significant ones are detailed below. PowGre is a series of modules that help explore the power consumption properties of applications and to determine the impact of power constraints on applications. Most of them use low-level processor interfaces to directly control voltage and frequency settings as well as per node, socket, or memory power bounds. MemGre are memory Gremlins and implement a new performance analysis technique that captures the application's effective use of the storage capacity of different levels of the memory hierarchy as well as the bandwidth between adjacent levels. The approach models various memory components as resources and measures how much of each resource the application uses from the application's own perspective. To the application a given amount of a resource is "used" if not having this amount will degrade the application's performance. This is in contrast to the hardware-centric perspective that considers "use" as any hardware action that utilizes the resource, even if it has no effect on performance. ResGre are Gremlins that use fault injection techniques to emulate higher fault rates than currently present in today's systems. Faults can be injected through various means, including network interposition, static analysis, and code modification, or direct application notification. ResGre also includes patches to previously released LLNL codes that can counteract and react to injected failures.

  2. Lossy Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.

    2015-12-01

    We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to check-pointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean prove the high potential of this approach with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate in both finite-difference and finite-element wave propagation codes.
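
    One small piece of the adaptive-accuracy idea can be illustrated by truncating float mantissa bits before a wavefield sample is stored; the snippet below is a generic sketch (not the authors' compression pipeline) that zeroes the low mantissa bits, which bounds the relative error by construction and makes the values far more compressible for a downstream entropy or dictionary coder.

      // Zero the lowest 'drop' mantissa bits of an IEEE-754 float (23-bit mantissa),
      // trading relative accuracy for compressibility of the stored wavefield samples.
      #include <cstdint>
      #include <cstdio>
      #include <cstring>

      float truncateMantissa(float value, int drop) {
          uint32_t bits;
          std::memcpy(&bits, &value, sizeof(bits));    // type-pun without undefined behavior
          bits &= ~((1u << drop) - 1u);                // clear the low mantissa bits
          float out;
          std::memcpy(&out, &bits, sizeof(out));
          return out;
      }

      int main() {
          const float sample = 0.123456789f;
          for (int drop : {8, 12, 16}) {
              float q = truncateMantissa(sample, drop);
              std::printf("drop %2d bits: %.9f (relative error %.2e)\n",
                          drop, q, (sample - q) / sample);
          }
      }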

  3. Efficient algorithms for accurate hierarchical clustering of huge datasets: tackling the entire protein space

    PubMed Central

    Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal

    2008-01-01

    Motivation: UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. Application: We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and are applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. Results: We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities makes it possible to significantly improve on current protein family clusterings, which are unable to directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins, and for large or divergent families. Availability: A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request. Contact: lonshy@cs.huji.ac.il PMID:18586742

  4. Memory monitoring by animals and humans

    NASA Technical Reports Server (NTRS)

    Smith, J. D.; Shields, W. E.; Allendoerfer, K. R.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)

    1998-01-01

    The authors asked whether animals and humans would similarly use an uncertain response to escape indeterminate memories. Monkeys and humans performed serial probe recognition tasks that produced differential memory difficulty across serial positions (e.g., primacy and recency effects). Participants were given an escape option that let them avoid any trials they wished and receive a hint to the trial's answer. Across species, across tasks, and even across conspecifics with sharper or duller memories, monkeys and humans used the escape option selectively when more indeterminate memory traces were probed. Their pattern of escaping always mirrored the pattern of their primary memory performance across serial positions. Signal-detection analyses confirm the similarity of the animals' and humans' performances. Optimality analyses assess their efficiency. Several aspects of monkeys' performance suggest the cognitive sophistication of their decisions to escape.

  5. Mental Fitness for Life: Assessing the Impact of an 8-Week Mental Fitness Program on Healthy Aging.

    ERIC Educational Resources Information Center

    Cusack, Sandra A.; Thompson, Wendy J. A.; Rogers, Mary E.

    2003-01-01

    A mental fitness program taught goal setting, critical thinking, creativity, positive attitudes, learning, memory, and self-expression to adults over 50 (n=22). Pre/posttests of depression and cognition revealed significant impacts on mental fitness, cognitive confidence, goal setting, optimism, creativity, flexibility, and memory. Not significant…

  6. Seeing Like a Geologist: Bayesian Use of Expert Categories in Location Memory

    ERIC Educational Resources Information Center

    Holden, Mark P.; Newcombe, Nora S.; Resnick, Ilyse; Shipley, Thomas F.

    2016-01-01

    Memory for spatial location is typically biased, with errors trending toward the center of a surrounding region. According to the category adjustment model (CAM), this bias reflects the optimal, Bayesian combination of fine-grained and categorical representations of a location. However, there is disagreement about whether categories are malleable.…

  7. Sleep, Memory & Brain Rhythms

    PubMed Central

    Watson, Brendon O.; Buzsáki, György

    2015-01-01

    Sleep occupies roughly one-third of our lives, yet the scientific community is still not entirely clear on its purpose or function. Existing data point most strongly to its role in memory and homeostasis: that sleep helps maintain basic brain functioning via a homeostatic mechanism that loosens connections between overworked synapses, and that sleep helps consolidate and re-form important memories. In this review, we will summarize these theories, but also focus on substantial new information regarding the relation of electrical brain rhythms to sleep. In particular, while REM sleep may contribute to the homeostatic weakening of overactive synapses, a prominent and transient oscillatory rhythm called “sharp-wave ripple” seems to allow for consolidation of behaviorally relevant memories across many structures of the brain. We propose that a theory of sleep involving the division of labor between two states of sleep–REM and non-REM, the latter of which has an abundance of ripple electrical activity–might allow for a fusion of the two main sleep theories. This theory then postulates that sleep performs a combination of consolidation and homeostasis that promotes optimal knowledge retention as well as optimal waking brain function. PMID:26097242

  8. Mild traumatic brain injury: graph-model characterization of brain networks for episodic memory.

    PubMed

    Tsirka, Vasso; Simos, Panagiotis G; Vakis, Antonios; Kanatsouli, Kassiani; Vourkas, Michael; Erimaki, Sofia; Pachou, Ellie; Stam, Cornelis Jan; Micheloyannis, Sifis

    2011-02-01

    Episodic memory is among the cognitive functions that can be affected in the acute phase following mild traumatic brain injury (MTBI). The present study used EEG recordings to evaluate global synchronization and network organization of rhythmic activity during the encoding and recognition phases of an episodic memory task varying in stimulus type (kaleidoscope images, pictures, words, and pseudowords). Synchronization of oscillatory activity was assessed using a linear and nonlinear connectivity estimator and network analyses were performed using algorithms derived from graph theory. Twenty five MTBI patients (tested within days post-injury) and healthy volunteers were closely matched on demographic variables, verbal ability, psychological status variables, as well as on overall task performance. Patients demonstrated sub-optimal network organization, as reflected by changes in graph parameters in the theta and alpha bands during both encoding and recognition. There were no group differences in spectral energy during task performance or on network parameters during a control condition (rest). Evidence of less optimally organized functional networks during memory tasks was more prominent for pictorial than for verbal stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Sleep, Memory & Brain Rhythms.

    PubMed

    Watson, Brendon O; Buzsáki, György

    2015-01-01

    Sleep occupies roughly one-third of our lives, yet the scientific community is still not entirely clear on its purpose or function. Existing data point most strongly to its role in memory and homeostasis: that sleep helps maintain basic brain functioning via a homeostatic mechanism that loosens connections between overworked synapses, and that sleep helps consolidate and re-form important memories. In this review, we will summarize these theories, but also focus on substantial new information regarding the relation of electrical brain rhythms to sleep. In particular, while REM sleep may contribute to the homeostatic weakening of overactive synapses, a prominent and transient oscillatory rhythm called "sharp-wave ripple" seems to allow for consolidation of behaviorally relevant memories across many structures of the brain. We propose that a theory of sleep involving the division of labor between two states of sleep-REM and non-REM, the latter of which has an abundance of ripple electrical activity-might allow for a fusion of the two main sleep theories. This theory then postulates that sleep performs a combination of consolidation and homeostasis that promotes optimal knowledge retention as well as optimal waking brain function.

  10. Exploiting graphics processing units for computational biology and bioinformatics.

    PubMed

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
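
    Since the article's running example is the all-pairs distance computation, the kernel below gives a minimal CUDA sketch of the practices it mentions (it is not the article's own code): each thread computes one pairwise Euclidean distance, and instance features are read through shared-memory tiles so that the global-memory accesses stay coalesced and each loaded chunk is reused by a whole tile of threads.

      // Tiled all-pairs Euclidean distance: X is n x d row-major, D is n x n.
      #include <cstdio>
      #include <cuda_runtime.h>

      constexpr int TILE = 16;

      __global__ void allPairsDistance(const float* X, float* D, int n, int d) {
          __shared__ float rowTile[TILE][TILE + 1];                // +1 padding avoids bank conflicts
          __shared__ float colTile[TILE][TILE + 1];
          int i = blockIdx.y * TILE + threadIdx.y;                 // first instance of the pair
          int j = blockIdx.x * TILE + threadIdx.x;                 // second instance of the pair
          float acc = 0.0f;

          for (int f0 = 0; f0 < d; f0 += TILE) {
              int f = f0 + threadIdx.x;                            // consecutive threads read
              int iLoad = blockIdx.y * TILE + threadIdx.y;         // consecutive floats (coalesced)
              int jLoad = blockIdx.x * TILE + threadIdx.y;
              rowTile[threadIdx.y][threadIdx.x] = (iLoad < n && f < d) ? X[iLoad * d + f] : 0.0f;
              colTile[threadIdx.y][threadIdx.x] = (jLoad < n && f < d) ? X[jLoad * d + f] : 0.0f;
              __syncthreads();
              for (int k = 0; k < TILE; ++k) {
                  float diff = rowTile[threadIdx.y][k] - colTile[threadIdx.x][k];
                  acc += diff * diff;
              }
              __syncthreads();
          }
          if (i < n && j < n) D[i * n + j] = sqrtf(acc);
      }

      int main() {
          const int n = 1024, d = 64;
          float *X, *D;
          cudaMallocManaged(&X, n * d * sizeof(float));
          cudaMallocManaged(&D, size_t(n) * n * sizeof(float));
          for (int i = 0; i < n * d; ++i) X[i] = float(i % 7);
          dim3 block(TILE, TILE), grid((n + TILE - 1) / TILE, (n + TILE - 1) / TILE);
          allPairsDistance<<<grid, block>>>(X, D, n, d);
          cudaDeviceSynchronize();
          std::printf("distance(0, 1) = %f\n", D[1]);
          cudaFree(X);
          cudaFree(D);
      }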

  11. 2014 Runtime Systems Summit. Runtime Systems Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Vivek; Budimlic, Zoran; Kulkani, Milind

    2016-09-19

    This report summarizes runtime system challenges for exascale computing, that follow from the fundamental challenges for exascale systems that have been well studied in past reports, e.g., [6, 33, 34, 32, 24]. Some of the key exascale challenges that pertain to runtime systems include parallelism, energy efficiency, memory hierarchies, data movement, heterogeneous processors and memories, resilience, performance variability, dynamic resource allocation, performance portability, and interoperability with legacy code. In addition to summarizing these challenges, the report also outlines different approaches to addressing these significant challenges that have been pursued by research projects in the DOE-sponsored X-Stack and OS/R programs. Since there is often confusion as to what exactly the term “runtime system” refers to in the software stack, we include a section on taxonomy to clarify the terminology used by participants in these research projects. In addition, we include a section on deployment opportunities for vendors and government labs to build on the research results from these projects. Finally, this report is also intended to provide a framework for discussing future research and development investments for exascale runtime systems, and for clarifying the role of runtime systems in exascale software.

  12. Recursion Relations for Double Ramification Hierarchies

    NASA Astrophysics Data System (ADS)

    Buryak, Alexandr; Rossi, Paolo

    2016-03-01

    In this paper we study various properties of the double ramification hierarchy, an integrable hierarchy of hamiltonian PDEs introduced in Buryak (Commun Math Phys 336(3):1085-1107, 2015) using intersection theory of the double ramification cycle in the moduli space of stable curves. In particular, we prove a recursion formula that recovers the full hierarchy starting from just one of the Hamiltonians, the one associated to the first descendant of the unit of a cohomological field theory. Moreover, we introduce analogues of the topological recursion relations and the divisor equation both for the Hamiltonian densities and for the string solution of the double ramification hierarchy. This machinery is very efficient and we apply it to various computations for the trivial and Hodge cohomological field theories, and for the r-spin Witten's classes. Moreover, we prove the Miura equivalence between the double ramification hierarchy and the Dubrovin-Zhang hierarchy for the Gromov-Witten theory of the complex projective line (extended Toda hierarchy).

  13. 'If you are good, I get better': the role of social hierarchy in perceptual decision-making.

    PubMed

    Santamaría-García, Hernando; Pannunzi, Mario; Ayneto, Alba; Deco, Gustavo; Sebastián-Gallés, Nuria

    2014-10-01

    So far, it was unclear if social hierarchy could influence sensory or perceptual cognitive processes. We evaluated the effects of social hierarchy on these processes using a basic visual perceptual decision task. We constructed a social hierarchy where participants performed the perceptual task separately with two covertly simulated players (superior, inferior). Participants were faster (better) when performing the discrimination task with the superior player. We studied the time course when social hierarchy was processed using event-related potentials and observed hierarchical effects even in early stages of sensory-perceptual processing, suggesting early top-down modulation by social hierarchy. Moreover, in a parallel analysis, we fitted a drift-diffusion model (DDM) to the results to evaluate the decision making process of this perceptual task in the context of a social hierarchy. Consistently, the DDM pointed to nondecision time (probably perceptual encoding) as the principal period influenced by social hierarchy. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  14. Circadian modulation of consolidated memory retrieval following sleep deprivation in Drosophila.

    PubMed

    Le Glou, Eric; Seugnet, Laurent; Shaw, Paul J; Preat, Thomas; Goguel, Valérie

    2012-10-01

    Several lines of evidence indicate that sleep plays a critical role in learning and memory. The aim of this study was to evaluate anesthesia resistant memory following sleep deprivation in Drosophila. Four to 16 h after aversive olfactory training, flies were sleep deprived for 4 h. Memory was assessed 24 h after training. Training, sleep deprivation, and memory tests were performed at different times during the day to evaluate the importance of the time of day for memory formation. The role of circadian rhythms was further evaluated using circadian clock mutants. Memory was disrupted when flies were exposed to 4 h of sleep deprivation during the consolidation phase. Interestingly, normal memory was observed following sleep deprivation when the memory test was performed during the 2 h preceding lights-off, a period characterized by maximum wake in flies. We also show that anesthesia resistant memory was less sensitive to sleep deprivation in flies with disrupted circadian rhythms. Our results indicate that anesthesia resistant memory, a consolidated memory less costly than long-term memory, is sensitive to sleep deprivation. In addition, we provide evidence that circadian factors influence memory vulnerability to sleep deprivation and memory retrieval. Taken together, the data show that memories weakened by sleep deprivation can be retrieved if the animals are tested at the optimal circadian time.

  15. On the robustness of Herlihy's hierarchy

    NASA Technical Reports Server (NTRS)

    Jayanti, Prasad

    1993-01-01

    A wait-free hierarchy maps object types to levels in Z+ ∪ {∞} and has the following property: if a type T is at level N, and T' is an arbitrary type, then there is a wait-free implementation of an object of type T', for N processes, using only registers and objects of type T. The infinite hierarchy defined by Herlihy is an example of a wait-free hierarchy. A wait-free hierarchy is robust if it has the following property: if T is at level N, and S is a finite set of types belonging to levels N - 1 or lower, then there is no wait-free implementation of an object of type T, for N processes, using any number and any combination of objects belonging to the types in S. Robustness implies that there are no clever ways of combining weak shared objects to obtain stronger ones. Contrary to what many researchers believe, we prove that Herlihy's hierarchy is not robust. We then define some natural variants of Herlihy's hierarchy, which are also infinite wait-free hierarchies. With the exception of one, which is still open, these are not robust either. We conclude with the open question of whether non-trivial robust wait-free hierarchies exist.

  16. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    NASA Astrophysics Data System (ADS)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Recently, many researchers have investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information caused by frequent data updates. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.
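
    To make the cache-utilization idea concrete, here is a minimal, hypothetical sketch of an array-backed space partitioning (kd-tree-style) layout in which the children of slot i sit at slots 2i+1 and 2i+2, so traversal touches contiguous memory rather than chasing pointers; it is not the index structure proposed in the paper.

    import numpy as np

    def build_implicit_kd_tree(points):
        """Balanced kd-tree stored in one flat array; children of slot i live at 2i+1 and 2i+2."""
        points = np.asarray(points, dtype=float)
        size = 1
        while size < len(points):
            size *= 2
        tree = np.full((2 * size, points.shape[1]), np.nan)   # NaN marks an empty slot

        def build(slot, pts, depth):
            if len(pts) == 0:
                return
            axis = depth % pts.shape[1]
            pts = pts[pts[:, axis].argsort()]
            mid = len(pts) // 2
            tree[slot] = pts[mid]                              # splitting point kept inline
            build(2 * slot + 1, pts[:mid], depth + 1)
            build(2 * slot + 2, pts[mid + 1:], depth + 1)

        build(0, points, 0)
        return tree

    tree = build_implicit_kd_tree(np.random.rand(1000, 2))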

  17. Design of a Variational Multiscale Method for Turbulent Compressible Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.
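
    For flavor, the snippet below is a minimal matrix-free Newton-Krylov sketch on a toy nonlinear system, under the assumption that the residual function is the only thing ever formed; it is not the paper's solver and omits the ADI preconditioner and space-time coupling entirely.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def newton_krylov_step(residual, u, eps=1e-7):
        """One Newton step where the Jacobian acts only through finite-difference products."""
        r0 = residual(u)

        def jac_vec(v):
            return (residual(u + eps * v) - r0) / eps

        J = LinearOperator((u.size, u.size), matvec=jac_vec)
        du, _ = gmres(J, -r0)                 # Krylov solve; no matrix is ever stored
        return u + du

    # Toy nonlinear system R(u) = u**3 - 1 with root u = 1 in every component.
    u = np.full(4, 2.0)
    for _ in range(20):
        u = newton_krylov_step(lambda w: w**3 - 1.0, u)
    print(u)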

  18. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also exhibits strong robustness to noise.
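
    As a reading aid, the sketch below shows the orthant-wise pseudo-gradient that standard OWL-QN substitutes for the ordinary gradient of f(x) + λ‖x‖₁ at non-differentiable points; the function names are illustrative and this is not the authors' inversion code.

    import numpy as np

    def pseudo_gradient(x, grad_f, lam):
        """Pseudo-gradient of f(x) + lam * ||x||_1 used by OWL-QN in place of the gradient."""
        pg = np.empty_like(x)
        pos, neg = x > 0, x < 0
        pg[pos] = grad_f[pos] + lam            # differentiable away from zero
        pg[neg] = grad_f[neg] - lam
        zero = ~(pos | neg)
        right = grad_f[zero] + lam             # one-sided derivative toward x_i > 0
        left = grad_f[zero] - lam              # one-sided derivative toward x_i < 0
        pg[zero] = np.where(right < 0, right, np.where(left > 0, left, 0.0))
        return pg

    In the full method, the subsequent quasi-Newton step is additionally constrained so that no coordinate crosses zero within an iteration, which is what lets the ℓ1 term produce genuinely sparse updates.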

  19. Fusion PIC code performance analysis on the Cori KNL system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian

    We study the attainable performance of Particle-In-Cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity and are not likely to be memory-bandwidth or even cache-bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.
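
    To illustrate the kind of data layout that vectorizes well, here is a deliberately simplified structure-of-arrays particle push in numpy; field gathering, the actual XGC1 kernels, and any KNL-specific tuning are all omitted, and the field values are assumed constants.

    import numpy as np

    def push_particles(x, v, E, dt, q_over_m=1.0):
        """Structure-of-arrays push: every update is a unit-stride sweep over contiguous arrays."""
        v += q_over_m * E * dt        # velocity update, trivially vectorizable
        x += v * dt                   # position update
        return x, v

    n = 1_000_000
    x, v, E = np.zeros(n), np.ones(n), np.full(n, 0.5)
    x, v = push_particles(x, v, E, dt=1e-3)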

  20. Nonextensivity in a Dark Maximum Entropy Landscape

    NASA Astrophysics Data System (ADS)

    Leubner, M. P.

    2011-03-01

    Nonextensive statistics along with network science, an emerging branch of graph theory, are increasingly recognized as potential interdisciplinary frameworks whenever systems are subject to long-range interactions and memory. Such settings are characterized by non-local interactions evolving in a non-Euclidean fractal/multi-fractal space-time making their behavior nonextensive. After summarizing the theoretical foundations from first principles, along with a discussion of entropy bifurcation and duality in nonextensive systems, we focus on selected significant astrophysical consequences. Those include the gravitational equilibria of dark matter (DM) and hot gas in clustered structures, the dark energy (DE) negative pressure landscape governed by the highest degree of mutual correlations and the hierarchy of discrete cosmic structure scales, available upon extremizing the generalized nonextensive link entropy in a homogeneous growing network.
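
    For orientation, the generalized entropy underlying nonextensive statistics is the Tsallis form, which recovers the Boltzmann-Gibbs entropy in the q → 1 limit (this is the standard textbook definition, not a formula taken from the abstract):

    $$ S_q \;=\; k\,\frac{1-\sum_i p_i^{\,q}}{q-1}, \qquad \lim_{q\to 1} S_q \;=\; -k\sum_i p_i \ln p_i . $$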

  1. Fractal based curves in musical creativity: A critical annotation

    NASA Astrophysics Data System (ADS)

    Georgaki, Anastasia; Tsolakis, Christos

    In this article we examine fractal curves and synthesis algorithms in musical composition and research. First we trace the evolution of different approaches to the use of fractals in music since the 1980s through a literature review. Furthermore, we review representative fractal algorithms and platforms that implement them. Properties such as self-similarity (pink noise), correlation, memory (related to the notion of Brownian motion) or non-correlation at multiple levels (white noise) can be used to develop a hierarchy of criteria for analyzing different layers of musical structure. L-systems can be applied in the modelling of melody in different musical cultures as well as in the investigation of musical perception principles. Finally, we propose a critical investigation approach for the use of artificial or natural fractal curves in systematic musicology.

  2. Asynchronous Data Retrieval from an Object-Oriented Database

    NASA Astrophysics Data System (ADS)

    Gilbert, Jonathan P.; Bic, Lubomir

    We present an object-oriented semantic database model which, similar to other object-oriented systems, combines the virtues of four concepts: the functional data model, a property inheritance hierarchy, abstract data types and message-driven computation. The main emphasis is on the last of these four concepts. We describe generic procedures that permit queries to be processed in a purely message-driven manner. A database is represented as a network of nodes and directed arcs, in which each node is a logical processing element, capable of communicating with other nodes by exchanging messages. This eliminates the need for shared memory and for centralized control during query processing. Hence, the model is suitable for implementation on a multiprocessor computer architecture, consisting of large numbers of loosely coupled processing elements.

  3. Procedural Quantum Programming

    NASA Astrophysics Data System (ADS)

    Ömer, Bernhard

    2002-09-01

    While classical computing science has developed a variety of methods and programming languages around the concept of the universal computer, the typical description of quantum algorithms still uses a purely mathematical, non-constructive formalism which makes no difference between a hydrogen atom and a quantum computer. This paper investigates how the concept of procedural programming languages, the most widely used classical formalism for describing and implementing algorithms, can be adapted to the field of quantum computing, and how non-classical features like the reversibility of unitary transformations, the non-observability of quantum states or the lack of copy and erase operations can be reflected semantically. It introduces the key concepts of procedural quantum programming (hybrid target architecture, operator hierarchy, quantum data types, memory management, etc.) and presents the experimental language QCL, which implements these principles.

  4. Set-Membership Identification for Robust Control Design

    DTIC Science & Technology

    1993-04-28

    system G can be regarded as having no memory in (18) in terms of G and 0, we get of events prior to t = 1, the initial time. Roughly, this means all...algorithm in [1]. Also in our application, the size of the matrices involved is quite large and special attention should be paid to the memory ...management and algorithmic implementation; otherwise huge amounts of memory will be required to perform the optimization even for modest values of M and N

  5. Cheetah: A Framework for Scalable Hierarchical Collective Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua S

    2011-01-01

    Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification, and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations, MPI Barrier() and MPI Bcast(), and run experiments using up to 49,152 processes on a Cray XT5, and a small InfiniBand based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%. 32 Byte and one Mega-Byte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
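
    The following mpi4py sketch illustrates the general two-level idea (an intra-node communicator plus an inter-node communicator of per-node leaders); it is an assumption-laden toy, not the Cheetah framework, and ignores buffer sharing and algorithm selection.

    from mpi4py import MPI

    def hierarchical_bcast(obj, world=MPI.COMM_WORLD):
        """Two-level broadcast: across per-node leaders first, then within each node."""
        rank = world.Get_rank()
        local = world.Split_type(MPI.COMM_TYPE_SHARED, key=rank)   # ranks sharing a node
        is_leader = (local.Get_rank() == 0)
        leaders = world.Split(0 if is_leader else MPI.UNDEFINED, key=rank)
        if is_leader:
            obj = leaders.bcast(obj, root=0)                       # inter-node stage
            leaders.Free()
        obj = local.bcast(obj, root=0)                             # intra-node fan-out
        local.Free()
        return obj

    data = hierarchical_bcast("payload" if MPI.COMM_WORLD.Get_rank() == 0 else None)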

  6. Compression in Working Memory and Its Relationship With Fluid Intelligence.

    PubMed

    Chekaf, Mustapha; Gauvrit, Nicolas; Guida, Alessandro; Mathy, Fabien

    2018-06-01

    Working memory has been shown to be strongly related to fluid intelligence; however, our goal is to shed further light on the process of information compression in working memory as a determining factor of fluid intelligence. Our main hypothesis was that compression in working memory is an excellent indicator for studying the relationship between working-memory capacity and fluid intelligence because both depend on the optimization of storage capacity. Compressibility of memoranda was estimated using an algorithmic complexity metric. The results showed that compressibility can be used to predict working-memory performance and that fluid intelligence is well predicted by the ability to compress information. We conclude that the ability to compress information in working memory is the reason why both manipulation and retention of information are linked to intelligence. This result offers a new concept of intelligence based on the idea that compression and intelligence are equivalent problems. Copyright © 2018 Cognitive Science Society, Inc.
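
    As a rough illustration of what "compressibility of memoranda" means operationally, the snippet below uses a zlib compression ratio as a crude stand-in for the algorithmic complexity metric used in the study (which it is not):

    import zlib

    def compressibility(sequence):
        """Compressed/raw length ratio: lower values mean a more compressible sequence."""
        raw = "".join(map(str, sequence)).encode()
        return len(zlib.compress(raw)) / len(raw)

    print(compressibility([1, 2, 3] * 8))                                     # regular, compresses well
    print(compressibility([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3]))  # irregular, compresses poorly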

  7. Programmable stream prefetch with resource optimization

    DOEpatents

    Boyle, Peter; Christ, Norman; Gara, Alan; Mawhinney, Robert; Ohmacht, Martin; Sugavanam, Krishnan

    2013-01-08

    A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor. The engine evaluates whether a first memory address requested in the load request is present and valid in a table. The engine checks whether there exists valid data corresponding to the first memory address in an array if the first memory address is present and valid in the table. The engine increments the prefetching depth of the first stream that the first memory address belongs to and fetches a cache line associated with the first memory address from the at least one cache memory device if there is not yet valid data corresponding to the first memory address in the array. The engine determines whether prefetching of additional data is needed for the first stream within its prefetching depth. The engine prefetches the additional data if the prefetching is needed.
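
    A hypothetical, highly simplified sketch of that decision flow is shown below; the class and field names are invented for illustration and do not come from the patent.

    class StreamPrefetchEngine:
        """Toy model of the decision flow: track streams, deepen on demand, prefetch ahead."""

        def __init__(self):
            self.table = {}    # stream id -> set of lines already holding valid data
            self.depth = {}    # stream id -> current prefetching depth

        def on_load(self, stream_id, line):
            if stream_id not in self.table:
                return False                       # address not present/valid in the table
            if line not in self.table[stream_id]:
                self.depth[stream_id] += 1         # no valid data yet: deepen the stream
                self.table[stream_id].add(line)    # and fetch the requested cache line
            for ahead in range(1, self.depth[stream_id] + 1):
                self.table[stream_id].add(line + ahead)   # prefetch within the current depth
            return True

    engine = StreamPrefetchEngine()
    engine.table["s0"], engine.depth["s0"] = {100}, 1
    engine.on_load("s0", 101)                      # miss: depth grows, lines 101-103 become valid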

  8. Impaired familiarity with preserved recollection after anterior temporal-lobe resection that spares the hippocampus.

    PubMed

    Bowles, Ben; Crupi, Carina; Mirsattari, Seyed M; Pigott, Susan E; Parrent, Andrew G; Pruessner, Jens C; Yonelinas, Andrew P; Köhler, Stefan

    2007-10-09

    It is well established that the medial-temporal lobe (MTL) is critical for recognition memory. The MTL is known to be composed of distinct structures that are organized in a hierarchical manner. At present, it remains controversial whether lower structures in this hierarchy, such as perirhinal cortex, support memory functions that are distinct from those of higher structures, in particular the hippocampus. Perirhinal cortex has been proposed to play a specific role in the assessment of familiarity during recognition, which can be distinguished from the selective contributions of the hippocampus to the recollection of episodic detail. Some researchers have argued, however, that the distinction between familiarity and recollection cannot capture functional specialization within the MTL and have proposed single-process accounts. Evidence supporting the dual-process view comes from demonstrations that selective hippocampal damage can produce isolated recollection impairments. It is unclear, however, whether temporal-lobe lesions that spare the hippocampus can produce selective familiarity impairments. Without this demonstration, single-process accounts cannot be ruled out. We examined recognition memory in NB, an individual who underwent surgical resection of left anterior temporal-lobe structures for treatment of intractable epilepsy. Her resection included a large portion of perirhinal cortex but spared the hippocampus. The results of four experiments based on three different experimental procedures (remember-know paradigm, receiver operating characteristics, and response-deadline procedure) indicate that NB exhibits impaired familiarity with preserved recollection. The present findings thus provide a crucial missing piece of support for functional specialization in the MTL.

  9. Human memory retrieval as Lévy foraging

    NASA Astrophysics Data System (ADS)

    Rhodes, Theo; Turvey, Michael T.

    2007-11-01

    When people attempt to recall as many words as possible from a specific category (e.g., animal names) their retrievals occur sporadically over an extended temporal period. Retrievals decline as recall progresses, but short retrieval bursts can occur even after tens of minutes of performing the task. To date, efforts to gain insight into the nature of retrieval from this fundamental phenomenon of semantic memory have focused primarily upon the exponential growth rate of cumulative recall. Here we focus upon the time intervals between retrievals. We expected and found that, for each participant in our experiment, these intervals conformed to a Lévy distribution suggesting that the Lévy flight dynamics that characterize foraging behavior may also characterize retrieval from semantic memory. The closer the exponent on the inverse square power-law distribution of retrieval intervals approximated the optimal foraging value of 2, the more efficient was the retrieval. At an abstract dynamical level, foraging for particular foods in one's niche and searching for particular words in one's memory must be similar processes if particular foods and particular words are randomly and sparsely located in their respective spaces at sites that are not known a priori. We discuss whether Lévy dynamics imply that memory processes, like foraging, are optimized in an ecological way.
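
    To make the exponent estimate concrete, here is a small sketch that fits a continuous power-law (Pareto) exponent to inter-retrieval intervals by maximum likelihood; the original study may well have used a different estimator, and the synthetic data below are an assumption for demonstration only.

    import numpy as np

    def power_law_exponent(intervals, x_min=None):
        """Maximum-likelihood exponent of a continuous power law fitted above x_min."""
        x = np.asarray(intervals, dtype=float)
        x_min = x.min() if x_min is None else x_min
        x = x[x >= x_min]
        return 1.0 + len(x) / np.sum(np.log(x / x_min))

    rng = np.random.default_rng(0)
    samples = (1.0 - rng.random(5000)) ** -1.0     # Pareto-distributed, true exponent 2
    print(power_law_exponent(samples))             # estimate should land near the optimal value of 2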

  10. Hierarchies and the Choice of Left Conjuncts (With Particular Attention to English).

    ERIC Educational Resources Information Center

    Allan, K.

    1987-01-01

    Hierarchies have been identified as determinants of constituent order. The set of such hierarchies is reviewed and ranked as determinants of NP sequencing in English. The effect of a hierarchy in other languages is compared to and contrasted with what is found in English. (Author/LMO)

  11. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    PubMed Central

    Robinson, Stephanie; Lachman, Margie; Rickenbach, Elizabeth

    2015-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to aging, low cognitive resources, and daily stress in relation to everyday memory problems. We examined whether SOC usage varied by age and level of constraints, and if the relationship between resources and memory problems was mitigated by SOC usage. A daily diary paradigm was used to explore day-to-day fluctuations in these relationships. Participants (n=145, ages 22 to 94) completed a baseline interview and a daily diary for seven consecutive days. Multilevel models examined between- and within-person relationships between daily SOC use, daily stressors, cognitive resources, and everyday memory problems. Middle-aged adults had the highest SOC usage, although older adults also showed high SOC use if they had high cognitive resources. More SOC strategies were used on high stress compared to low stress days. Moreover, the relationship between daily stress and memory problems was buffered by daily SOC use, such that on high-stress days, those who used more SOC strategies reported fewer memory problems than participants who used fewer SOC strategies. The paradox of resources and SOC use can be qualified by the type of resource-limitation. Deficits in global resources were not tied to SOC usage or benefits. Conversely, under daily constraints tied to stress, the use of SOC increased and led to fewer memory problems. PMID:26997686

  12. Fog computing job scheduling optimization based on bees swarm

    NASA Astrophysics Data System (ADS)

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called Bees Life Algorithm (BLA) aimed at addressing the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between CPU execution time and allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms the traditional particle swarm optimization and genetic algorithm in terms of CPU execution time and allocated memory.
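
    A minimal sketch of the kind of objective such a scheduler could minimize is given below, with a random-search stand-in for the swarm loop; the task and node values, weights, and the exact cost model are invented assumptions rather than the paper's formulation.

    import random

    def fitness(assignment, tasks, nodes, w_time=0.5, w_mem=0.5):
        """Weighted cost of assigning task i to fog node assignment[i]; lower is better."""
        cpu = sum(t["cycles"] / nodes[assignment[i]]["speed"] for i, t in enumerate(tasks))
        mem_per_node = [0.0] * len(nodes)
        for i, t in enumerate(tasks):
            mem_per_node[assignment[i]] += t["mem"]
        return w_time * cpu + w_mem * max(mem_per_node)   # balance time against peak memory

    random.seed(0)
    tasks = [{"cycles": random.randint(1, 10), "mem": random.randint(1, 4)} for _ in range(20)]
    nodes = [{"speed": s} for s in (1.0, 2.0, 4.0)]
    best = min((tuple(random.randrange(len(nodes)) for _ in tasks) for _ in range(2000)),
               key=lambda a: fitness(a, tasks, nodes))
    print(fitness(best, tasks, nodes))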

  13. The Toda lattice hierarchy and deformation of conformal field theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukuma, M.; Takebe, T.

    In this paper, the authors point out that the Toda lattice hierarchy known in soliton theory is relevant for the description of the deformations of conformal field theories while the KP hierarchy describes unperturbed conformal theories. It is shown that the holomorphic parts of the conserved currents in the perturbed system (the Toda lattice hierarchy) coincide with the conserved currents in the KP hierarchy and can be written in terms of the W-algebraic currents. Furthermore, their anti-holomorphic counterparts are obtained.

  14. The concept of hierarchy in general systems theory.

    PubMed

    Gasparski, W

    1994-01-01

    The paper reviews the main ideas related to the concept of hierarchy as they are discussed in contemporary general systems theory. After presenting a dictionary definition of the concept, the author examines the intuitive idea of hierarchy, quoting Mario Bunge's notion of level structure. Then the relationship between two other concepts, a system and a hierarchy, is characterised on the basis of Bowler's, Bunge's, Klir's, and the author's studies. Finally, the paper concludes that hierarchy is not an ontological concept but an epistemological one.

  15. Precision Measurements of Long-Baseline Neutrino Oscillation at LBNF

    DOE PAGES

    Worcester, Elizabeth

    2015-08-06

    In a long-baseline neutrino oscillation experiment, the primary physics objectives are to determine the neutrino mass hierarchy, to determine the octant of the neutrino mixing angle θ23, to search for CP violation in neutrino oscillation, and to precisely measure the size of any CP-violating effect that is discovered. This presentation provides a brief introduction to these measurements and reports on efforts to optimize the design of a long-baseline neutrino oscillation experiment, the status of LBNE, and the transition to an international collaboration at LBNF.

  16. Integration of Coastal Ocean Dynamics Application Radar (CODAR) and Short-Term Predictive System (STPS): Surface Current Estimates into the Search and Rescue Optimal Planning System (SAROPS)

    DTIC Science & Technology

    2005-11-01

    walk (Markovian in position) techniques to perform these simulations (Breivik et al., 2004; Spaulding and Howlett, 1996; Spaulding and Jayko, 1991; ASA...studies. Model 1 is used in most search and rescue models to make trajectory predictions (Breivik et al., 2004; Spaulding and Howlett, 1996; Spaulding...ocean gyres: Part II hierarchy of stochastic models, Journal of Physical Oceanography, Vol. 32, 797-830. March 2002. Breivik, O., A. Allen, C. Wettre

  17. Solutions for medical databases optimal exploitation.

    PubMed

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

    The paper discusses methods to apply OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistical support to data warehousing techniques. The transformations have low computational complexity in practice and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.

  18. Intelligent Control Systems Research

    NASA Technical Reports Server (NTRS)

    Loparo, Kenneth A.

    1994-01-01

    Results of a three phase research program into intelligent control systems are presented. The first phase looked at implementing the lowest or direct level of a hierarchical control scheme using a reinforcement learning approach assuming no a priori information about the system under control. The second phase involved the design of an adaptive/optimizing level of the hierarchy and its interaction with the direct control level. The third and final phase of the research was aimed at combining the results of the previous phases with some a priori information about the controlled system.
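
    As a concrete, purely illustrative example of learning a direct control level with no a priori model, the sketch below runs tabular Q-learning on a made-up one-dimensional plant whose setpoint is state 0; nothing here is taken from the report itself.

    import random

    def q_learning(n_states=5, n_actions=2, episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
        """Tabular Q-learning for a toy plant: action 1 moves the state toward the setpoint (state 0)."""
        Q = [[0.0] * n_actions for _ in range(n_states)]
        for _ in range(episodes):
            s = random.randrange(n_states)
            for _ in range(20):
                if random.random() < eps:
                    a = random.randrange(n_actions)                       # explore
                else:
                    a = max(range(n_actions), key=lambda act: Q[s][act])  # exploit
                s_next = max(0, s - 1) if a == 1 else min(n_states - 1, s + 1)
                r = 1.0 if s_next == 0 else 0.0                           # reward at the setpoint
                Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
                s = s_next
        return Q

    Q = q_learning()
    print([max(range(2), key=lambda a: Q[s][a]) for s in range(5)])   # learned policy per state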

  19. The Benefit of Attention-to-Memory Depends on the Interplay of Memory Capacity and Memory Load

    PubMed Central

    Lim, Sung-Joo; Wöstmann, Malte; Geweke, Frederik; Obleser, Jonas

    2018-01-01

    Humans can be cued to attend to an item in memory, which facilitates and enhances the perceptual precision in recalling this item. Here, we demonstrate that this facilitating effect of attention-to-memory hinges on the overall degree of memory load. The benefit an individual draws from attention-to-memory depends on her overall working memory performance, measured as sensitivity (d′) in a retroactive cue (retro-cue) pitch discrimination task. While listeners maintained 2, 4, or 6 auditory syllables in memory, we provided valid or neutral retro-cues to direct listeners’ attention to one, to-be-probed syllable in memory. Participants’ overall memory performance (i.e., perceptual sensitivity d′) was relatively unaffected by the presence of valid retro-cues across memory loads. However, a more fine-grained analysis using psychophysical modeling shows that valid retro-cues elicited faster pitch-change judgments and improved perceptual precision. Importantly, as memory load increased, listeners’ overall working memory performance correlated with inter-individual differences in the degree to which precision improved (r = 0.39, p = 0.029). Under high load, individuals with low working memory profited least from attention-to-memory. Our results demonstrate that retrospective attention enhances perceptual precision of attended items in memory but listeners’ optimal use of informative cues depends on their overall memory abilities. PMID:29520246

  20. A genetic fuzzy analytical hierarchy process based projection pursuit method for selecting schemes of water transportation projects

    NASA Astrophysics Data System (ADS)

    Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming

    2006-10-01

    The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix A it produces is relatively small, and the result obtained is both stable and accurate; therefore FPRM-PP can be widely used in the optimal selection of different multi-factor decision-making schemes.
