Sample records for shared-memory sci node

  1. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J. [Rochester, MN]; Dozsa, Gabor [Ardsley, NY]; Ratterman, Joseph D. [Rochester, MN]; Smith, Brian E. [Rochester, MN]

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.
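
    A minimal sketch of the work-unit idea in C (hypothetical code, not the patented implementation): the "job status object" is reduced to an atomic counter, each work unit reduces one disjoint slice of the shared buffers, and any available core claims the next unit with a fetch-and-add. Build with cc -pthread.

      /* Sketch: cores claim allreduce work units from a shared job-status
       * object via an atomic counter. Hypothetical layout, illustration
       * only -- not the patented implementation. */
      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdio.h>

      #define CORES  4
      #define UNITS  16          /* shared-memory allreduce work units */
      #define N      (1 << 20)   /* elements per core's contribution   */

      static double input[CORES][N];   /* per-core contributions        */
      static double result[N];         /* shared reduction target       */
      static atomic_int next_unit;     /* the "job status object"       */

      static void *worker(void *arg) {
          (void)arg;
          int u;
          /* each available core pulls the next unclaimed work unit */
          while ((u = atomic_fetch_add(&next_unit, 1)) < UNITS) {
              size_t lo = (size_t)u * N / UNITS;
              size_t hi = (size_t)(u + 1) * N / UNITS;
              for (size_t i = lo; i < hi; i++) {
                  double sum = 0.0;
                  for (int c = 0; c < CORES; c++)
                      sum += input[c][i];
                  result[i] = sum;     /* disjoint slices: no races */
              }
          }
          return NULL;
      }

      int main(void) {
          for (int c = 0; c < CORES; c++)
              for (size_t i = 0; i < N; i++)
                  input[c][i] = 1.0;

          pthread_t t[CORES];
          for (int c = 0; c < CORES; c++) pthread_create(&t[c], NULL, worker, NULL);
          for (int c = 0; c < CORES; c++) pthread_join(t[c], NULL);

          printf("result[0] = %g (expect %d)\n", result[0], CORES);
          return 0;
      }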

  2. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J; Dozsa, Gabor; Ratterman, Joseph D; Smith, Brian E

    2014-06-10

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.

  3. Centrally managed unified shared virtual address space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkes, John

    Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.

  4. Hypercluster - Parallel processing for computational mechanics

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    1988-01-01

    An account is given of the development status, performance capabilities and implications for further development of NASA-Lewis' testbed 'hypercluster' parallel computer network, in which multiple processors communicate through a shared memory. Processors have local as well as shared memory; the hypercluster is expanded in the same manner as the hypercube, with processor clusters replacing the normal single processor node. The NASA-Lewis machine has three nodes with a vector personality and one node with a scalar personality. Each of the vector nodes uses four board-level vector processors, while the scalar node uses four general-purpose microcomputer boards.

  5. A Massively Parallel Code for Polarization Calculations

    NASA Astrophysics Data System (ADS)

    Akiyama, Shizuka; Höflich, Peter

    2001-03-01

    We present an implementation of our Monte Carlo radiation transport method for rapidly expanding, NLTE atmospheres on massively parallel computers that utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version based on the shared memory model alone. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability by about 40%.

  6. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache™ Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache™ Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning-based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URLs (i.e. using OPeNDAP model.nc?varname, or local files). The specified variables are partitioned by time/space and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several N-dimensional array libraries (Scala Breeze, Java jblas & netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the PySpark side, many of our science codes already use the numpy and SciPy ecosystems. The talk will cover: the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory & disk usage.

  7. DMA shared byte counters in a parallel computer

    DOEpatents

    Chen, Dong; Gara, Alan G.; Heidelberger, Philip; Vranas, Pavlos

    2010-04-06

    A parallel computer system is constructed as a network of interconnected compute nodes. Each of the compute nodes includes at least one processor, a memory and a DMA engine. The DMA engine includes a processor interface for interfacing with the at least one processor, DMA logic, a memory interface for interfacing with the memory, a DMA network interface for interfacing with the network, injection and reception byte counters, injection and reception FIFO metadata, and status registers and control registers. The injection FIFO metadata maintains the memory locations of each injection FIFO, including its current head and tail, and the reception FIFO metadata maintains the memory locations of each reception FIFO, including its current head and tail. The injection byte counters and reception byte counters may be shared between messages.
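
    A sketch of the shared byte-counter idea in C11 atomics (illustrative only; all names are hypothetical): the counter is armed with the total expected byte count for the messages that share it, each received packet subtracts its size, and completion is signaled when the count reaches zero.

      /* Sketch: a reception byte counter shared by several messages.
       * Hypothetical simplification, not the patented hardware. */
      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdio.h>

      typedef struct {
          atomic_long bytes_remaining;       /* shared across messages */
      } byte_counter_t;

      static void counter_arm(byte_counter_t *c, long total_bytes) {
          atomic_store(&c->bytes_remaining, total_bytes);
      }

      /* Called from the DMA reception path for each received packet;
       * returns true when every expected byte has arrived. */
      static bool counter_on_packet(byte_counter_t *c, long packet_bytes) {
          return atomic_fetch_sub(&c->bytes_remaining, packet_bytes)
                 == packet_bytes;
      }

      int main(void) {
          byte_counter_t c;
          counter_arm(&c, 3 * 512L);         /* three 512-byte packets */
          for (int i = 0; i < 3; i++)
              if (counter_on_packet(&c, 512L))
                  printf("all packets arrived after packet %d\n", i + 1);
          return 0;
      }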

  8. Scalable Triadic Analysis of Large-Scale Graphs: Multi-Core vs. Multi-Processor vs. Multi-Threaded Shared Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Marquez, Andres; Choudhury, Sutanay

    2012-09-01

    Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code’s data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
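
    As a concrete (and much simplified) illustration of the kind of computation involved, the sketch below counts only closed triads (triangles) of an undirected graph held in shared memory, parallelized with OpenMP; a full triad census additionally classifies all 16 directed edge configurations. Hypothetical code, not the paper's algorithm; compile with cc -fopenmp. The toy graph is a ring with chords, so the expected count is V.

      /* Sketch: shared-memory triangle counting, a simplified relative
       * of a triad census. The reduction clause avoids races on the
       * shared counter. */
      #include <omp.h>
      #include <stdio.h>

      #define V 512
      static unsigned char adj[V][V];   /* shared adjacency matrix */

      int main(void) {
          /* toy graph: ring edges plus distance-2 chords */
          for (int i = 0; i < V; i++) {
              adj[i][(i + 1) % V] = adj[(i + 1) % V][i] = 1;
              adj[i][(i + 2) % V] = adj[(i + 2) % V][i] = 1;
          }

          long long closed = 0;
          #pragma omp parallel for reduction(+:closed) schedule(dynamic)
          for (int i = 0; i < V; i++)
              for (int j = i + 1; j < V; j++)
                  if (adj[i][j])
                      for (int k = j + 1; k < V; k++)
                          if (adj[i][k] && adj[j][k])
                              closed++;

          printf("closed triads: %lld\n", closed);   /* expect V = 512 */
          return 0;
      }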

  9. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    … nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless … directories are mounted on all nodes, along with a file system dedicated to shared projects. … processors with 64 GB of memory. All nodes are connected to the high-speed InfiniBand network …

  10. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

    We present a cluster-based volume rendering system for roaming very large volumes. This system allows a gigabyte-sized probe to be moved inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
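
    A toy sketch of the lookup order that such a cache hierarchy implies (all names hypothetical; peers and disk are stubbed out): on a local cache miss, the system first asks the other cluster nodes before falling back to local disk, since remote RAM over Gigabit Ethernet beats local disk latency.

      /* Sketch of a distributed-shared-memory cache lookup order.
       * Stubbed predicates stand in for real residency checks. */
      #include <stdio.h>
      #include <stdbool.h>

      typedef enum { HIT_LOCAL, HIT_PEER, HIT_DISK } source_t;

      static bool in_local_cache(int page) { return page % 2 == 0; }
      static bool on_some_peer(int page)   { return page % 3 == 0; }

      static source_t fetch_page(int page) {
          if (in_local_cache(page)) return HIT_LOCAL;
          /* miss: query the other cluster nodes before the disk */
          if (on_some_peer(page))   return HIT_PEER;
          return HIT_DISK;
      }

      int main(void) {
          const char *name[] = { "local", "peer", "disk" };
          for (int p = 1; p <= 6; p++)
              printf("page %d served from %s\n", p, name[fetch_page(p)]);
          return 0;
      }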

  11. Spinal Plasticity and Behavior: BDNF-Induced Neuromodulation in Uninjured and Injured Spinal Cord

    PubMed Central

    Huie, J. Russell

    2016-01-01

    Brain-derived neurotrophic factor (BDNF) is a member of the neurotrophic factor family of signaling molecules. Since its discovery over three decades ago, BDNF has been identified as an important regulator of neuronal development, synaptic transmission, and cellular and synaptic plasticity, and has been shown to function in the formation and maintenance of certain forms of memory. Neural plasticity that underlies learning and memory in the hippocampus shares distinct characteristics with spinal cord nociceptive plasticity. Research examining the role BDNF plays in spinal nociception and pain overwhelmingly suggests that BDNF promotes pronociceptive effects. BDNF induces synaptic facilitation and engages central sensitization-like mechanisms. Also, peripheral injury-induced neuropathic pain is often accompanied by increased spinal expression of BDNF. Research has extended to examine how spinal cord injury (SCI) influences BDNF plasticity and the effects BDNF has on sensory and motor functions after SCI. Functional recovery and adaptive plasticity after SCI are typically associated with upregulation of BDNF. Although neuropathic pain is a common consequence of SCI, the relation between BDNF and pain after SCI remains elusive. This article reviews recent literature and discusses the diverse actions of BDNF. We also highlight similarities and differences in BDNF-induced nociceptive plasticity in naïve and SCI conditions. PMID:27721996

  12. The Mark III Hypercube-Ensemble Computers

    NASA Technical Reports Server (NTRS)

    Peterson, John C.; Tuazon, Jesus O.; Lieberman, Don; Pniel, Moshe

    1988-01-01

    Mark III Hypercube concept applied in development of series of increasingly powerful computers. Processor of each node of Mark III Hypercube ensemble is specialized computer containing three subprocessors and shared main memory. Solves problem quickly by simultaneously processing part of problem at each such node and passing combined results to host computer. Disciplines benefitting from speed and memory capacity include astrophysics, geophysics, chemistry, weather, high-energy physics, applied mechanics, image processing, oil exploration, aircraft design, and microcircuit design.

  13. SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data

    NASA Astrophysics Data System (ADS)

    Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework, that extends Apache™ Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache™ Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate performance of the various matrix libraries in distributed pipelines, such as ND4J™ and Breeze™. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, and parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.

  14. The Simulation of Real-Time Scalable Coherent Interface

    NASA Technical Reports Server (NTRS)

    Li, Qiang; Grant, Terry; Grover, Radhika S.

    1997-01-01

    Scalable Coherent Interface (SCI, IEEE/ANSI Std 1596-1992) (SCI1, SCI2) is a high-performance interconnect for shared memory multiprocessor systems. In this project we investigate an SCI real-time protocol (RTSCI1) using Directed Flow Control Symbols. We studied the issues of efficient generation of control symbols and created a simulation model of the protocol on a ring-based SCI system. This report presents the results of the study. The project has been implemented using SES/Workbench. The details that follow encompass aspects of both SCI and flow-control protocols, as well as the effect of realistic client/server processing delay. The report is organized as follows. Section 2 provides a description of the simulation model. Section 3 describes the protocol implementation details. The next three sections of the report elaborate on the workload, results and conclusions. Appended to the report is a description of the tool, SES/Workbench, used in our simulation, and internal details of our implementation of the protocol.

  15. Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver

    NASA Astrophysics Data System (ADS)

    Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre

    2014-06-01

    This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore + SIMD) on shared memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations, which usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full-core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 10⁶ spatial cells and 1 × 10¹² DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-node nuclear simulation tool.
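
    As a consistency check of the quoted figures (our arithmetic, not taken from the paper): a sustained 235 GFlops at 40.74% of peak implies a node peak of roughly 235 / 0.4074 ≈ 577 GFlops, i.e. about 577 / 32 ≈ 18 GFlops per core of the 32-core SMP node.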

  16. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter

    2015-12-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
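
    A minimal sketch of the fork/copy-on-write pattern in C (hypothetical code, not the ATLAS framework): the parent initializes large read-only data, then forks workers that draw event numbers from a shared event queue, here reduced to an atomic counter placed in mmap'd shared memory.

      /* Sketch: fork-based event processing with copy-on-write sharing,
       * in the spirit of AthenaMP. Illustrative only. */
      #include <stdatomic.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>
      #include <sys/wait.h>
      #include <unistd.h>

      #define WORKERS 4
      #define EVENTS  100
      #define GEOM_MB 64

      int main(void) {
          /* large "geometry/conditions" data: shared copy-on-write */
          size_t n = GEOM_MB * 1024UL * 1024UL;
          unsigned char *geom = malloc(n);
          for (size_t i = 0; i < n; i++) geom[i] = (unsigned char)i;

          /* shared event queue: one atomic counter visible to all forks */
          atomic_int *next = mmap(NULL, sizeof *next, PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
          atomic_store(next, 0);

          for (int w = 0; w < WORKERS; w++) {
              if (fork() == 0) {            /* child: pages still shared */
                  int ev;
                  while ((ev = atomic_fetch_add(next, 1)) < EVENTS) {
                      /* "process" the event: read-only use of geom keeps
                       * the pages shared; writes would trigger copies */
                      volatile unsigned char x = geom[(size_t)ev % n];
                      (void)x;
                  }
                  _exit(0);
              }
          }
          for (int w = 0; w < WORKERS; w++) wait(NULL);
          printf("processed %d events with %d workers\n", EVENTS, WORKERS);
          free(geom);
          return 0;
      }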

  17. Cooperative Data Sharing: Simple Support for Clusters of SMP Nodes

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Balley, David H. (Technical Monitor)

    1997-01-01

    Libraries like PVM and MPI send typed messages to allow for heterogeneous cluster computing. Lower-level libraries, such as GAM, provide more efficient access to communication by removing the need to copy messages between the interface and user space in some cases. Still lower-level interfaces, such as UNET, get right down to the hardware level to provide maximum performance. However, these are all still interfaces for passing messages from one process to another, and have limited utility in a shared-memory environment, due primarily to the fact that message passing is just another term for copying. This drawback is made more pertinent by today's hybrid architectures (e.g. clusters of SMPs), where it is difficult to know beforehand whether two communicating processes will share memory. As a result, even portable language tools (like HPF compilers) must either map all interprocess communication into message passing, with the accompanying performance degradation in shared memory environments, or they must check each communication at run time and implement the shared-memory case separately for efficiency. Cooperative Data Sharing (CDS) is a single user-level API which abstracts all communication between processes into the sharing and access coordination of memory regions, in a model which might be described as "distributed shared messages" or "large-grain distributed shared memory". As a result, the user programs to a simple latency-tolerant abstract communication specification which can be mapped efficiently to either a shared-memory or message-passing based run-time system, depending upon the available architecture. Unlike some distributed shared memory interfaces, the user still has complete control over the assignment of data to processors, the forwarding of data to its next likely destination, and the queuing of data until it is needed, so even the relatively high latency present in clusters can be accommodated. CDS does not require special use of an MMU, which can add overhead to some DSM systems, and does not require an SPMD programming model. Unlike some message-passing interfaces, CDS allows the user to implement efficient demand-driven applications where processes must "fight" over data, and does not perform copying if processes share memory and do not attempt concurrent writes. CDS also supports heterogeneous computing, dynamic process creation, handlers, and a very simple thread-arbitration mechanism. Additional support for array subsections is currently being considered. The CDS1 API, which forms the kernel of CDS, is built primarily upon only two communication primitives, one process initiation primitive, and some data translation (and marshalling) routines, memory allocation routines, and priority control routines. The entire current collection of 28 routines provides enough functionality to implement most (or all) of MPI 1 and 2, which has a much larger interface consisting of hundreds of routines. Still, the API is small enough to consider integrating into standard OS interfaces for handling inter-process communication in a network-independent way. This approach would also help to solve many of the problems plaguing other higher-level standards such as MPI and PVM which must, in some cases, "play OS" to adequately address progress and process control issues. The CDS2 API, a higher level of interface roughly equivalent in functionality to MPI and to be built entirely upon CDS1, is still being designed. It is intended to add support for the equivalent of communicators, reduction and other collective operations, process topologies, additional support for process creation, and some automatic memory management. CDS2 will not exactly match MPI, because the copy-free semantics of communication from CDS1 will be supported. CDS2 application programs will also be free, with care, to use CDS1 directly. CDS1 has been implemented on networks of workstations running unmodified Unix-based operating systems, using UDP/IP and vendor-supplied high-performance locks. Although its inter-node performance is currently unimpressive due to rudimentary implementation technique, it even now outperforms highly optimized MPI implementations on intra-node communication due to its support for non-copy communication. The similarity of the CDS1 architecture to that of other projects such as UNET and TRAP suggests that the inter-node performance can be increased significantly to surpass MPI or PVM, and it may be possible to migrate some of its functionality to communication controllers.

  18. Gyrokinetic micro-turbulence simulations on the NERSC 16-way SMP IBM SP computer: experiences and performance results

    NASA Astrophysics Data System (ADS)

    Ethier, Stephane; Lin, Zhihong

    2001-10-01

    Earlier this year, the National Energy Research Scientific Computing Center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors, each running at a peak performance of 1.5 GFlops, this IBM SP machine has a theoretical performance of almost 3.8 TFlops. To efficiently harness such computing power in a single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared memory nodes of the NERSC IBM SP. Performance results are shown as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)

  19. Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2012-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
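
    A minimal hybrid-parallelism sketch in C along the same lines (illustrative only, not the paper's renderer; N_PER_RANK is an arbitrary stand-in workload): MPI ranks span nodes, OpenMP threads span the cores within each node, and far fewer processes participate in communication than in a flat MPI-only run. Build with mpicc -fopenmp.

      /* Sketch: distributed-memory parallelism across nodes (MPI) plus
       * shared-memory parallelism within a node (OpenMP). */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      #define N_PER_RANK (1 << 22)

      int main(int argc, char **argv) {
          int provided, rank;
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* shared-memory parallelism inside the node */
          double local = 0.0;
          #pragma omp parallel for reduction(+:local)
          for (int i = 0; i < N_PER_RANK; i++)
              local += 1.0;                 /* stand-in for real work */

          /* distributed-memory communication: one value per rank */
          double total = 0.0;
          MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                     MPI_COMM_WORLD);
          if (rank == 0)
              printf("total = %.0f\n", total);
          MPI_Finalize();
          return 0;
      }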

  20. Multi-core processing and scheduling performance in CMS

    NASA Astrophysics Data System (ADS)

    Hernández, J. M.; Evans, D.; Foulkes, S.

    2012-12-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.

  1. OpenMP Performance on the Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days. Columbia is a 20-node supercomputer built on proven 512-processor nodes, and is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors. It provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2,048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.

  2. Learnable Models for Information Diffusion and its Associated User Behavior in Micro-blogosphere

    DTIC Science & Technology

    2012-08-30

    According to the work of Even-Dar and Shapira (2007), we recall the definition of the basic voter model on network G. In the model, each node of G … reason as follows. We started with the K distinct initial nodes and all the other nodes were neutral in the beginning. Recall that we set the average time … memory, running under Linux. … Conclusion: Unlike the popular …

  3. Holding a manual response sequence in memory can disrupt vocal responses that share semantic features with the manual response.

    PubMed

    Fournier, Lisa Renee; Wiediger, Matthew D; McMeans, Ryan; Mattson, Paul S; Kirkwood, Joy; Herzog, Theibot

    2010-07-01

    Holding an action plan in memory for later execution can delay execution of another action if the actions share a similar (compatible) feature. This compatibility interference (CI) occurs for actions that share the same response modality (e.g., manual response). We investigated whether CI can generalize to actions that utilize different response modalities (manual and vocal). In three experiments, participants planned and withheld a sequence of key-presses with the left or right hand based on the visual identity of the first stimulus, and then immediately executed a speeded, vocal response ('left' or 'right') to a second visual stimulus. The vocal response was based on discriminating stimulus color (Experiment 1), reading a written word (Experiment 2), or reporting the antonym of a written word (Experiment 3). Results showed that CI occurred when the manual response hand (e.g., left) was compatible with the identity of the vocal response (e.g., 'left') in Experiments 1 and 3, but not in Experiment 2. This suggests that partial overlap of semantic codes is sufficient to obtain CI unless the intervening action can be accessed automatically (Experiment 2). These findings are consistent with the code occupation hypothesis and the general framework of the theory of event coding (Behav Brain Sci 24:849-878, 2001a; Behav Brain Sci 24:910-937, 2001b).

  4. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache™ Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache™ Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) decrease the time to compute comparison statistics and plots from minutes to seconds; (2) allow for interactive exploration of time-series properties over seasons and years; (3) decrease the time for satellite data ingestion into RCMES to hours; (4) allow for Level-2 comparisons with higher-order statistics or PDFs in minutes to hours; and (5) move RCMES into a near real-time decision-making platform. We will report on: the architecture and design of SciSpark, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory & disk usage.

  5. Metadata based management and sharing of distributed biomedical data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Liu, Peiya

    2014-01-01

    Biomedical research data sharing is becoming increasingly important for researchers to reuse experiments, pool expertise and validate approaches. However, there are many hurdles to data sharing, including the unwillingness to share, the lack of a flexible data model for providing context information, the difficulty of sharing syntactically and semantically consistent data across distributed institutions, and the high cost of providing tools to share the data. SciPort is a web-based collaborative biomedical data sharing platform to support data sharing across distributed organisations. SciPort provides a generic metadata model to flexibly customise and organise the data. To enable convenient data sharing, SciPort provides a central-server-based data sharing architecture with one-click data sharing from a local server. To enable consistency, SciPort provides collaborative distributed schema management across distributed sites. To enable semantic consistency, SciPort provides semantic tagging through controlled vocabularies. SciPort is lightweight and can be easily deployed for building data sharing communities. PMID:24834105

  6. Authentication and Key Establishment in Dynamic Wireless Sensor Networks

    PubMed Central

    Qiu, Ying; Zhou, Jianying; Baek, Joonsang; Lopez, Javier

    2010-01-01

    When a sensor node roams within a very large and distributed wireless sensor network, which consists of numerous sensor nodes, its routing path and neighborhood keep changing. In order to provide a high level of security in this environment, the moving sensor node needs to be authenticated to new neighboring nodes and a key established for secure communication. The paper proposes an efficient and scalable protocol to establish and update the authentication key in a dynamic wireless sensor network environment. The protocol guarantees that two sensor nodes share at least one key with probability 1 (100%) with less memory and energy cost, while not causing considerable communication overhead. PMID:22319321

  7. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-02

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  8. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-09

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  9. Multi-core processing and scheduling performance in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, J. M.; Evans, D.; Foulkes, S.

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.

  10. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved 'clock time' speedups in fusing datasets on our own compute nodes and in the public Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept/prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed 'near' the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way, with the researcher paying only his own Cloud compute bill. (Figure 1: SciReduce Architecture.)

  11. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
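
    A much-simplified sketch of the interleaving idea in C with pthreads (hypothetical code, not the patented method; the network read/write cores' buffers are folded into two extra input buffers here): two "reduction cores" each handle every other interleaved chunk of the shared result buffer, summing all four input buffers for their chunks.

      /* Sketch: interleaved-chunk local reduction by two threads over
       * buffers in shared memory. Disjoint chunks, so no locking. */
      #include <pthread.h>
      #include <stdio.h>

      #define N       (1 << 16)
      #define CHUNK   4096
      #define BUFFERS 4          /* 2 reduction + network write + read */

      static double in[BUFFERS][N];
      static double out[N];      /* shared reduction target */

      static void *reduce_core(void *arg) {
          int me = (int)(long)arg;            /* 0 or 1 */
          /* each reduction core takes every other interleaved chunk */
          for (int c = me * CHUNK; c < N; c += 2 * CHUNK)
              for (int i = c; i < c + CHUNK; i++) {
                  double s = 0.0;
                  for (int b = 0; b < BUFFERS; b++) s += in[b][i];
                  out[i] = s;
              }
          return NULL;
      }

      int main(void) {
          for (int b = 0; b < BUFFERS; b++)
              for (int i = 0; i < N; i++) in[b][i] = 1.0;
          pthread_t t0, t1;
          pthread_create(&t0, NULL, reduce_core, (void *)0L);
          pthread_create(&t1, NULL, reduce_core, (void *)1L);
          pthread_join(t0, NULL);
          pthread_join(t1, NULL);
          printf("out[0] = %g (expect %d)\n", out[0], BUFFERS);
          return 0;
      }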

  12. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  13. Parallel discrete event simulation using shared memory

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  14. ACCESS: A Communicating and Cooperating Expert Systems System.

    DTIC Science & Technology

    1988-01-31

    … therefore more quickly accepted by programmers. This is in part due to the already familiar concepts of multi-processing environments (e.g. semaphores [Di68] and monitors [Br75]) which can be viewed as a special case of synchronized shared memory models [Di68]. Heterogeneous systems, however, are by … locality of nodes is not possible and frequent access of memory is required. Synchronization of processes also suffers from a loss of efficiency in …

  15. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2013-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept/prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed "near" the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way, with the researcher paying only his own Cloud compute bill. (Figure 1: Architecture.)

  16. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook; Fetzer, Eric

    2014-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map-reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in a hybrid Cloud (private Eucalyptus & public Amazon). Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept and prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed "near" the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way, with the researcher paying only his own Cloud compute bill.

  17. Memory, executive, and multidomain subtle cognitive impairment: clinical and biomarker findings.

    PubMed

    Toledo, Jon B; Bjerke, Maria; Chen, Kewei; Rozycki, Martin; Jack, Clifford R; Weiner, Michael W; Arnold, Steven E; Reiman, Eric M; Davatzikos, Christos; Shaw, Leslie M; Trojanowski, John Q

    2015-07-14

    We studied the biomarker signatures and prognoses of 3 different subtle cognitive impairment (SCI) groups (executive, memory, and multidomain) as well as the subjective memory complaints (SMC) group. We studied 522 healthy controls in the Alzheimer's Disease Neuroimaging Initiative (ADNI). Cutoffs for executive, memory, and multidomain SCI were defined using participants who remained cognitively normal (CN) for 7 years. CSF Alzheimer disease (AD) biomarkers, composite and region-of-interest (ROI) MRI, and fluorodeoxyglucose-PET measures were compared in these participants. Using a stringent cutoff (fifth percentile), 27.6% of the ADNI participants were classified as SCI. Most single ROI or global-based measures were not sensitive to detect differences between groups. Only MRI-SPARE-AD (Spatial Pattern of Abnormalities for Recognition of Early AD), a quantitative MRI pattern-based global index, showed differences between all groups, excluding the executive SCI group. Atrophy patterns differed in memory SCI and SMC. The CN and the SMC groups presented a similar distribution of preclinical dementia stages. Fifty percent of the participants with executive, memory, and multidomain SCI progressed to mild cognitive impairment or dementia at 7, 5, and 2 years, respectively. Our results indicate that (1) the different SCI categories have different clinical prognoses and biomarker signatures, (2) longitudinally followed CN subjects are needed to establish clinical cutoffs, (3) subjects with SMC show a frontal pattern of brain atrophy, and (4) pattern-based analyses outperform commonly used single ROI-based neuroimaging biomarkers and are needed to detect initial stages of cognitive impairment.

  18. Developing a data sharing community for spinal cord injury research.

    PubMed

    Callahan, Alison; Anderson, Kim D; Beattie, Michael S; Bixby, John L; Ferguson, Adam R; Fouad, Karim; Jakeman, Lyn B; Nielson, Jessica L; Popovich, Phillip G; Schwab, Jan M; Lemmon, Vance P

    2017-09-01

    The rapid growth in data sharing presents new opportunities across the spectrum of biomedical research. Global efforts are underway to develop practical guidance for implementation of data sharing and open data resources. These include the recent recommendation of 'FAIR Data Principles', which assert that if data is to have broad scientific value, then digital representations of that data should be Findable, Accessible, Interoperable and Reusable (FAIR). The spinal cord injury (SCI) research field has a long history of collaborative initiatives that include sharing of preclinical research models and outcome measures. In addition, new tools and resources are being developed by the SCI research community to enhance opportunities for data sharing and access. With this in mind, the National Institute of Neurological Disorders and Stroke (NINDS) at the National Institutes of Health (NIH) hosted a workshop on October 5-6, 2016 in Bethesda, MD, in collaboration with the Open Data Commons for Spinal Cord Injury (ODC-SCI) titled "Preclinical SCI Data: Creating a FAIR Share Community". Workshop invitees were nominated by the workshop steering committee (co-chairs: ARF and VPL; members: AC, KDA, MSB, KF, LBJ, PGP, JMS), to bring together junior and senior level experts including preclinical and basic SCI researchers from academia and industry, data science and bioinformatics experts, investigators with expertise in other neurological disease fields, clinical researchers, members of the SCI community, and program staff representing federal and private funding agencies. The workshop and ODC-SCI efforts were sponsored by the International Spinal Research Trust (ISRT), the Rick Hansen Institute, Wings for Life, the Craig H. Neilsen Foundation and NINDS. The number of attendees was limited to ensure active participation and feedback in small groups. The goals were to examine the current landscape for data sharing in SCI research and provide a path to its future. Below are highlights from the workshop, including perspectives on the value of data sharing in SCI research, workshop participant perspectives and concerns, descriptions of existing resources and actionable directions for further engaging the SCI research community in a model that may be applicable to many other areas of neuroscience. This manuscript is intended to share these initial findings with the broader research community, and to provide talking points for continued feedback from the SCI field, as it continues to move forward in the age of data sharing.

  19. A multiarchitecture parallel-processing development environment

    NASA Technical Reports Server (NTRS)

    Townsend, Scott; Blech, Richard; Cole, Gary

    1993-01-01

    A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.

  20. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; O'Keefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility such that the previous disadvantages of shared-disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies, whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of in the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
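
    The device-held locking that GFS relies on can be illustrated with a short sketch. The following Python fragment is our own illustration, not GFS code: a per-block lock table stands in for the lock state a network-attached device would maintain, serializing read-modify-write cycles from multiple nodes (threads here); the class and function names are hypothetical.

      # Minimal sketch of device-held locking for atomic read-modify-write,
      # in the spirit of the GFS design described above. SharedBlockStore is
      # a hypothetical stand-in for a network-attached storage device.
      import threading

      class SharedBlockStore:
          """Emulates a storage device that maintains its own lock table,
          so cluster nodes need no central file server."""
          def __init__(self, nblocks):
              self._blocks = [0] * nblocks
              self._locks = [threading.Lock() for _ in range(nblocks)]  # per-block device locks

          def read_modify_write(self, blockno, update):
              # The device lock serializes accesses from all nodes, making
              # the read-modify-write cycle atomic.
              with self._locks[blockno]:
                  value = self._blocks[blockno]          # read
                  self._blocks[blockno] = update(value)  # modify + write
                  return self._blocks[blockno]

      store = SharedBlockStore(nblocks=8)
      # Two "nodes" (threads here) incrementing the same block never lose updates.
      workers = [threading.Thread(target=lambda: [store.read_modify_write(0, lambda v: v + 1)
                                                  for _ in range(1000)]) for _ in range(2)]
      for w in workers: w.start()
      for w in workers: w.join()
      print(store.read_modify_write(0, lambda v: v))  # -> 2000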

  1. Memory complaints in subjective cognitive impairment, amnestic mild cognitive impairment and mild Alzheimer's disease.

    PubMed

    Ryu, Seon Young; Lee, Sang Bong; Kim, Tae Woo; Lee, Taek Jun

    2016-12-01

    Memory complaints are a frequent phenomenon in elderly individuals and can lead to opportunistic help-seeking behavior. The aim of this study was to compare different aspects of memory complaints (i.e., prospective versus retrospective complaints) in individuals with subjective cognitive impairment (SCI), amnestic mild cognitive impairment (aMCI), and mild Alzheimer's disease (AD). The study included a total of 115 participants (mean age: 68.82 ± 8.83 years) with SCI (n = 34), aMCI (n = 46), and mild AD (n = 35). Memory complaints were assessed using the Prospective and Retrospective Memory Questionnaire (PRMQ), which consists of 16 items that describe everyday memory failures of both prospective memory (PM) and retrospective memory (RM). For aMCI and AD subjects, informants also completed an informant rating of the PRMQ. All participants completed detailed neuropsychological tests. Results show that PM complaints were equivalent among the three groups. However, RM complaints differed. Specifically, RM complaints in aMCI were higher than in SCI, but similar to AD. Informant-reported memory complaints were higher for AD than aMCI. Our study suggests that RM complaints may be helpful in discriminating between SCI and aMCI, but both PM and RM complaints are of limited value in differentiating aMCI from AD.

  2. Effect of resource constraints on intersimilar coupled networks.

    PubMed

    Shai, S; Dobson, S

    2012-12-01

    Most real-world networks do not live in isolation but are often coupled together within a larger system. Recent studies have shown that intersimilarity between coupled networks increases the connectivity of the overall system. However, unlike connected nodes in a single network, coupled nodes often share resources, like time, energy, and memory, which can impede flow processes through contention when intersimilarly coupled. We study a model of a constrained susceptible-infected-recovered (SIR) process on a system consisting of two random networks sharing the same set of nodes, where nodes are limited to interact with (and therefore infect) a maximum number of neighbors at each epidemic time step. We find that, in agreement with previous studies, when no limit exists (regular SIR model), positively correlated (intersimilar) coupling results in a lower epidemic threshold than negatively correlated (interdissimilar) coupling. However, in the case of the constrained SIR model, the obtained epidemic threshold is lower with negatively correlated coupling. The latter finding differentiates our work from previous studies and provides another step towards revealing the qualitative differences between single and coupled networks.
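
    A minimal simulation sketch of this constrained process, in Python, may clarify the model: two random graphs share one node set, and an infected node may contact at most a fixed number of pooled neighbors per step. The parameter names (beta, max_contacts) and the unit infectious period are our illustrative assumptions, not the paper's exact update rules.

      # Toy sketch of a constrained SIR process on two coupled random graphs
      # sharing one node set; illustrative only, not the paper's code.
      import random

      def make_random_graph(n, p):
          adj = {v: set() for v in range(n)}
          for u in range(n):
              for v in range(u + 1, n):
                  if random.random() < p:
                      adj[u].add(v); adj[v].add(u)
          return adj

      def constrained_sir_step(state, g1, g2, beta, max_contacts):
          new_state = dict(state)
          for v, s in state.items():
              if s != 'I':
                  continue
              # Neighbors pooled from both coupled networks...
              nbrs = list(g1[v] | g2[v])
              random.shuffle(nbrs)
              # ...but the shared resource budget caps contacts per step.
              for u in nbrs[:max_contacts]:
                  if state[u] == 'S' and random.random() < beta:
                      new_state[u] = 'I'
              new_state[v] = 'R'  # recover after one step (unit infectious period)
          return new_state

      n = 500
      g1, g2 = make_random_graph(n, 0.01), make_random_graph(n, 0.01)
      state = {v: 'S' for v in range(n)}; state[0] = 'I'
      while any(s == 'I' for s in state.values()):
          state = constrained_sir_step(state, g1, g2, beta=0.3, max_contacts=2)
      print(sum(s == 'R' for s in state.values()), "nodes were eventually infected")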

  3. Effect of resource constraints on intersimilar coupled networks

    NASA Astrophysics Data System (ADS)

    Shai, S.; Dobson, S.

    2012-12-01

    Most real-world networks do not live in isolation but are often coupled together within a larger system. Recent studies have shown that intersimilarity between coupled networks increases the connectivity of the overall system. However, unlike connected nodes in a single network, coupled nodes often share resources, like time, energy, and memory, which can impede flow processes through contention when intersimilarly coupled. We study a model of a constrained susceptible-infected-recovered (SIR) process on a system consisting of two random networks sharing the same set of nodes, where nodes are limited to interact with (and therefore infect) a maximum number of neighbors at each epidemic time step. We find that, in agreement with previous studies, when no limit exists (regular SIR model), positively correlated (intersimilar) coupling results in a lower epidemic threshold than negatively correlated (interdissimilar) coupling. However, in the case of the constrained SIR model, the obtained epidemic threshold is lower with negatively correlated coupling. The latter finding differentiates our work from previous studies and provides another step towards revealing the qualitative differences between single and coupled networks.

  4. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
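
    The causality rule at the heart of the Chandy-Misra approach can be sketched briefly. In the toy Python fragment below (our illustration, not the paper's code), a logical process consumes the smallest-timestamp event only once every input channel holds a pending message, so neither a global event list nor a global clock is needed.

      # Minimal sketch of the conservative (Chandy-Misra) rule: an event is
      # safe to process once no earlier event can arrive on any input channel.
      # Class and method names are illustrative.
      import collections

      class LogicalProcess:
          def __init__(self, name, inputs):
              self.name = name
              self.channels = {src: collections.deque() for src in inputs}

          def deliver(self, src, timestamp, event):
              self.channels[src].append((timestamp, event))

          def safe_to_advance(self):
              # Safe only when every input channel has at least one message,
              # so no earlier event can still arrive on an empty channel.
              return len(self.channels) > 0 and all(self.channels.values())

          def step(self):
              if not self.safe_to_advance():
                  return None
              src = min(self.channels, key=lambda s: self.channels[s][0][0])
              ts, event = self.channels[src].popleft()
              return ts, event  # process the event at its timestamp

      lp = LogicalProcess('queue0', inputs=['src_a', 'src_b'])
      lp.deliver('src_a', 3, 'arrival')
      print(lp.step())        # None: channel from src_b is empty, not yet safe
      lp.deliver('src_b', 5, 'arrival')
      print(lp.step())        # (3, 'arrival'): smallest timestamp is now provably safe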

  5. Using VirtualGL/TurboVNC Software on the Peregrine System

    Science.gov Websites

    VirtualGL/TurboVNC software on the Peregrine system allows users to access and share large-memory visualization nodes with high-end graphics processing units. This may be better than just using X11 forwarding when connecting from a remote site with low bandwidth.

  6. PANDA: A distributed multiprocessor operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubb, P.

    1989-01-01

    PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.

  7. Paging memory from random access memory to backing storage in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory ("RAM") to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  8. Exploring the use of I/O nodes for computation in a MIMD multiprocessor

    NASA Technical Reports Server (NTRS)

    Kotz, David; Cai, Ting

    1995-01-01

    As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between 'compute' and 'I/O' nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O. Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.

  9. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2014-01-07

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
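
    A rough sketch of the buffering idea, using threads in place of compute-node processes, is given below. The SharedRegion class and its methods are hypothetical stand-ins for the claimed mechanism, in which a message can be deposited in a pre-established shared-memory buffer before its receiver has been initialized.

      # Illustrative sketch (ours, not the patent's code): the first process
      # carves per-process message buffers out of a shared region, so a
      # sender can deposit a message before its receiver exists.
      import threading

      class SharedRegion:
          def __init__(self, expected_processes):
              # "Allocated" once, at first-process initialization.
              self.buffers = {rank: [] for rank in range(expected_processes)}
              self.locks = {rank: threading.Lock() for rank in range(expected_processes)}

          def send(self, dst_rank, message):
              # No check that dst_rank has been initialized -- the message
              # simply waits in the receiver's pre-established buffer.
              with self.locks[dst_rank]:
                  self.buffers[dst_rank].append(message)

          def attach(self, rank):
              # On initialization, a process retrieves the pointer (here: a
              # reference) to its own buffer and drains any waiting messages.
              with self.locks[rank]:
                  pending, self.buffers[rank] = self.buffers[rank], []
              return pending

      region = SharedRegion(expected_processes=4)
      region.send(2, b'sent before receiver initialization')  # receiver 2 not started yet
      print(region.attach(2))                                 # receiver finds the message waiting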

  10. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2013-07-23

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

  11. Large-Scale Network Analysis of Whole-Brain Resting-State Functional Connectivity in Spinal Cord Injury: A Comparative Study.

    PubMed

    Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar

    2017-09-01

    Network analysis based on graph theory depicts the brain as a complex network that allows inspection of the overall brain connectivity pattern and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in complete spinal cord injury (SCI) patients. To characterize modular reorganization of the whole brain into constituent nodes and compare network metrics between SCI and control subjects, 15 subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed followed by parcellation of the brain into 116 regions of interest (ROI). Correlation analysis was performed between every ROI pair to construct connectivity matrices and ROIs were categorized into distinct modules. Subsequently, local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional network of the SCI and the control subjects into nine and seven modules, respectively. The individual modules differed across groups in terms of the number and the composition of constituent nodes. LE demonstrated a statistically significant decrease at multiple cost levels in SCI subjects. GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis in studying complex brain networks. Comparing modules across groups revealed differences in the number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.

  12. Ensuring correct rollback recovery in distributed shared memory systems

    NASA Technical Reports Server (NTRS)

    Janssens, Bob; Fuchs, W. Kent

    1995-01-01

    Distributed shared memory (DSM) implemented on a cluster of workstations is an increasingly attractive platform for executing parallel scientific applications. Checkpointing and rollback techniques can be used in such a system to allow the computation to progress in spite of the temporary failure of one or more processing nodes. This paper presents the design of an independent checkpointing method for DSM that takes advantage of DSM's specific properties to reduce error-free and rollback overhead. The scheme reduces the dependencies that need to be considered for correct rollback to those resulting from transfers of pages. Furthermore, in-transit messages can be recovered without the use of logging. We extend the scheme to a DSM implementation using lazy release consistency, where the frequency of dependencies is further reduced.

  13. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  14. A message passing kernel for the hypercluster parallel processing test bed

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Quealy, Angela; Cole, Gary L.

    1989-01-01

    A Message-Passing Kernel (MPK) for the Hypercluster parallel-processing test bed is described. The Hypercluster is being developed at the NASA Lewis Research Center to support investigations of parallel algorithms and architectures for computational fluid and structural mechanics applications. The Hypercluster resembles the hypercube architecture except that each node consists of multiple processors communicating through shared memory. The MPK efficiently routes information through the Hypercluster, using a message-passing protocol when necessary and faster shared-memory communication whenever possible. The MPK also interfaces all of the processors with the Hypercluster operating system (HYCLOPS), which runs on a Front-End Processor (FEP). This approach distributes many of the I/O tasks to the Hypercluster processors and eliminates the need for a separate I/O support program on the FEP.
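
    The routing decision described above reduces to a simple test, sketched here in Python under the assumption of a fixed processors-per-node mapping; node_of and route are illustrative names, not MPK routines.

      # Sketch of the transport choice the MPK is described as making: use
      # fast shared-memory delivery when source and destination share a node,
      # and fall back to the message-passing protocol across nodes.

      def node_of(processor_id, procs_per_node=4):
          return processor_id // procs_per_node

      def route(src, dst, payload, shm_queues, network_send):
          if node_of(src) == node_of(dst):
              shm_queues[dst].append(payload)   # on-node: shared-memory queue
          else:
              network_send(dst, payload)        # off-node: message-passing link

      shm_queues = {i: [] for i in range(8)}
      sent = []
      route(0, 2, 'local', shm_queues, lambda d, p: sent.append((d, p)))
      route(0, 5, 'remote', shm_queues, lambda d, p: sent.append((d, p)))
      print(shm_queues[2], sent)  # ['local'] [(5, 'remote')]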

  15. A Parallel Saturation Algorithm on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Ezekiel, Jonathan; Siminiceanu, Radu

    2007-01-01

    Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor, dual-core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.

  16. Long-distance quantum communication over noisy networks without long-time quantum memory

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Łodyga, Justyna; Pankowski, Łukasz; Przysiężna, Anna

    2014-12-01

    The problem of sharing entanglement over large distances is crucial for implementations of quantum cryptography. A possible scheme for long-distance entanglement sharing and quantum communication exploits networks whose nodes share Einstein-Podolsky-Rosen (EPR) pairs. In Perseguers et al. [Phys. Rev. A 78, 062324 (2008), 10.1103/PhysRevA.78.062324] the authors put forward an important isomorphism between storing quantum information in a dimension D and transmission of quantum information in a (D+1)-dimensional network. We show that it is possible to obtain long-distance entanglement in a noisy two-dimensional (2D) network, even when taking into account that encoding and decoding of a state is exposed to an error. For 3D networks we propose a simple encoding and decoding scheme based solely on syndrome measurements on 2D Kitaev topological quantum memory. Our procedure constitutes an alternative scheme of state injection that can be used for universal quantum computation on the 2D Kitaev code. It is shown that the encoding scheme is equivalent to teleporting the state, from a specific node into a whole two-dimensional network, through some virtual EPR pair existing within the rest of the network qubits. We present an analytic lower bound on the fidelity of the encoding and decoding procedure, using as our main tool a modified metric on the space-time lattice, deviating from a taxicab metric at the first and the last time slices.

  17. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-located arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in lat/lon bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.
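
    The map/reduce-over-named-arrays style can be sketched compactly. The following numpy fragment is a hedged illustration of the pattern, not the SciReduce API: a map stage quality-controls granules of named arrays, and a reduce stage averages them into latitude bins. All names and parameters are our own illustrative choices.

      # Hedged sketch of map/reduce over bundles of named numpy arrays.
      import numpy as np

      def map_quality_control(granule):
          """Map: keep only profiles whose quality flag passes."""
          ok = granule['quality'] == 0
          return {'water_vapor': granule['water_vapor'][ok],
                  'lat': granule['lat'][ok]}

      def reduce_bin_average(bundles, lat_edges):
          """Reduce: average profiles into latitude bins."""
          wv = np.concatenate([b['water_vapor'] for b in bundles])
          lat = np.concatenate([b['lat'] for b in bundles])
          idx = np.digitize(lat, lat_edges)
          return {'climatology': np.array([wv[idx == i].mean() if (idx == i).any() else np.nan
                                           for i in range(1, len(lat_edges))])}

      rng = np.random.default_rng(0)
      granules = [{'water_vapor': rng.random(100), 'lat': rng.uniform(-90, 90, 100),
                   'quality': rng.integers(0, 2, 100)} for _ in range(12)]
      mapped = [map_quality_control(g) for g in granules]          # run in parallel in the real system
      print(reduce_bin_average(mapped, np.linspace(-90, 90, 7))['climatology'])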

  18. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Manipon, G.; Hua, H.

    2012-04-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.

  19. New scaling relation for information transfer in biological networks

    PubMed Central

    Kim, Hyunju; Davies, Paul; Walker, Sara Imari

    2015-01-01

    We quantify characteristics of the informational architecture of two representative biological networks: the Boolean network model for the cell-cycle regulatory network of the fission yeast Schizosaccharomyces pombe (Davidich et al. 2008 PLoS ONE 3, e1672 (doi:10.1371/journal.pone.0001672)) and that of the budding yeast Saccharomyces cerevisiae (Li et al. 2004 Proc. Natl Acad. Sci. USA 101, 4781–4786 (doi:10.1073/pnas.0305937101)). We compare our results for these biological networks with the same analysis performed on ensembles of two different types of random networks: Erdös–Rényi and scale-free. We show that both biological networks share features in common that are not shared by either random network ensemble. In particular, the biological networks in our study process more information than the random networks on average. Both biological networks also exhibit a scaling relation in information transferred between nodes that distinguishes them from random, where the biological networks stand out as distinct even when compared with random networks that share important topological properties, such as degree distribution, with the biological network. We show that the most biologically distinct regime of this scaling relation is associated with a subset of control nodes that regulate the dynamics and function of each respective biological network. Information processing in biological networks is therefore interpreted as an emergent property of topology (causal structure) and dynamics (function). Our results demonstrate quantitatively how the informational architecture of biologically evolved networks can distinguish them from other classes of network architecture that do not share the same informational properties. PMID:26701883

  20. WinHPC System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Node configurations: Node 04 has dual Intel Xeon E5530 CPUs with 24GB of memory. Nodes 05-20 have dual AMD Opteron 2374 HE CPUs with 16GB of memory. Nodes 21-30 have been decommissioned. Nodes 31-35 have dual Intel Xeon X5675 CPUs with 48GB of memory. Nodes 36-37 have dual Intel Xeon E5-2680 CPUs.

  1. Retention of Ag-specific memory CD4+ T cells in the draining lymph node indicates lymphoid tissue resident memory populations.

    PubMed

    Marriott, Clare L; Dutton, Emma E; Tomura, Michio; Withers, David R

    2017-05-01

    Several different memory T-cell populations have now been described based upon surface receptor expression and migratory capabilities. Here we have assessed murine endogenous memory CD4+ T cells generated within a draining lymph node and their subsequent migration to other secondary lymphoid tissues. Having established a model response targeting a specific peripheral lymph node, we temporally labelled all the cells within the draining lymph node using photoconversion. Tracking of photoconverted and non-photoconverted Ag-specific CD4+ T cells revealed the rapid establishment of a circulating memory population in all lymph nodes within days of immunisation. Strikingly, a resident memory CD4+ T-cell population became established in the draining lymph node and persisted for several months in the absence of detectable migration to other lymphoid tissue. These cells most closely resembled effector memory T cells, usually associated with circulation through non-lymphoid tissue, but here, these cells were retained in the draining lymph node. These data indicate that lymphoid tissue resident memory CD4+ T-cell populations are generated in peripheral lymph nodes following immunisation. © 2017 The Authors. European Journal of Immunology published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Asynchronous Data Retrieval from an Object-Oriented Database

    NASA Astrophysics Data System (ADS)

    Gilbert, Jonathan P.; Bic, Lubomir

    We present an object-oriented semantic database model which, similar to other object-oriented systems, combines the virtues of four concepts: the functional data model, a property inheritance hierarchy, abstract data types and message-driven computation. The main emphasis is on the last of these four concepts. We describe generic procedures that permit queries to be processed in a purely message-driven manner. A database is represented as a network of nodes and directed arcs, in which each node is a logical processing element, capable of communicating with other nodes by exchanging messages. This eliminates the need for shared memory and for centralized control during query processing. Hence, the model is suitable for implementation on a multiprocessor computer architecture, consisting of large numbers of loosely coupled processing elements.

  3. An efficient 3-dim FFT for plane wave electronic structure calculations on massively parallel machines composed of multiprocessor nodes

    NASA Astrophysics Data System (ADS)

    Goedecker, Stefan; Boulet, Mireille; Deutsch, Thierry

    2003-08-01

    Three-dimensional Fast Fourier Transforms (FFTs) are the main computational task in plane wave electronic structure calculations. Obtaining high performance on large numbers of processors is non-trivial on the latest generation of parallel computers, which consist of nodes made up of shared memory multiprocessors. A non-dogmatic method for obtaining high performance for such 3-dim FFTs in a combined MPI/OpenMP programming paradigm is presented. Exploiting the peculiarities of plane wave electronic structure calculations, speedups of up to 160 and speeds of up to 130 Gflops were obtained on 256 processors.
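
    The data movement underlying such parallel 3-dim FFTs can be demonstrated sequentially with numpy: per-slab 2D transforms, a global transpose (the all-to-all exchange in a real MPI code), then the remaining 1D transforms. This sketch of slab decomposition is our own illustration, not the paper's implementation.

      # Minimal numpy sketch of slab decomposition for a parallel 3-dim FFT.
      import numpy as np

      def slab_fft3(data, nprocs):
          # Phase 1: each "process" owns n/nprocs xy-planes and FFTs them in 2D.
          slabs = np.split(data, nprocs, axis=0)
          slabs = [np.fft.fftn(s, axes=(1, 2)) for s in slabs]
          partial = np.concatenate(slabs, axis=0)
          # Phase 2: global transpose -- in MPI this is the all-to-all exchange
          # that gives every process complete pencils along the remaining axis.
          transposed = partial.transpose(1, 0, 2)
          pencils = np.split(transposed, nprocs, axis=0)
          pencils = [np.fft.fft(p, axis=1) for p in pencils]
          return np.concatenate(pencils, axis=0).transpose(1, 0, 2)

      x = np.random.rand(8, 8, 8)
      print(np.allclose(slab_fft3(x, nprocs=4), np.fft.fftn(x)))  # True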

  4. Cognitive performance in hypotensive persons with spinal cord injury.

    PubMed

    Jegede, Adejoke B; Rosado-Rivera, Dwindally; Bauman, William A; Cardozo, Christopher P; Sano, Mary; Moyer, Jeremy M; Brooks, Monifa; Wecht, Jill Maria

    2010-02-01

    Due to sympathetic decentralization, individuals with spinal cord injury (SCI), especially those with tetraplegia, often present with hypotension, which worsens with upright posture. Several investigations in the non-SCI population have noted a relationship between chronic hypotension and deficits in memory, attention and processing speed, and delayed reaction times. Our aim was to determine cognitive function in persons with SCI who were normotensive or hypotensive over a 24-h observation period while maintaining their routine activities. Subjects included 20 individuals with chronic SCI (2-39 years), 13 with tetraplegia (C4-8) and 7 with paraplegia (T2-11). Individuals with hypotension were defined as having a mean 24-h systolic blood pressure (SBP) below 110 mmHg for males and 100 mmHg for females, and having spent ≥50% of the total time below these gender-specific thresholds. The cognitive battery included assessment of memory (CVLT), attention and processing speed (Digit Span, Stroop word and color, and Oral Trails A), language (COWAT) and executive function (Oral Trails B and Stroop color-word). Demographic parameters did not differ between the hypotensive and normotensive groups; the proportion of individuals with tetraplegia (82%) was higher in the hypotensive group. Memory was significantly impaired (P < 0.05) and there was a trend toward slowed attention and processing speed (P < 0.06) in the hypotensive compared to the normotensive group. These preliminary data suggest that chronic hypotension in persons with SCI is associated with deficits in memory and possibly attention and processing speed, as previously reported in the non-SCI population.

  5. A revised limbic system model for memory, emotion and behaviour.

    PubMed

    Catani, Marco; Dell'acqua, Flavio; Thiebaut de Schotten, Michel

    2013-09-01

    Emotion, memories and behaviour emerge from the coordinated activities of regions connected by the limbic system. Here, we propose an update of the limbic model based on the seminal work of Papez, Yakovlev and MacLean. In the revised model we identify three distinct but partially overlapping networks: (i) the hippocampal-diencephalic and parahippocampal-retrosplenial network, dedicated to memory and spatial orientation; (ii) the temporo-amygdala-orbitofrontal network, for the integration of visceral sensation and emotion with semantic memory and behaviour; and (iii) the default-mode network, involved in autobiographical memories and introspective self-directed thinking. The three networks share cortical nodes that are emerging as principal hubs in connectomic analysis. This revised network model of the limbic system reconciles recent functional imaging findings with anatomical accounts of clinical disorders commonly associated with limbic pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Internode data communications in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-03

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

  7. Internode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

  8. Development of an Implicit, Charge and Energy Conserving 2D Electromagnetic PIC Code on Advanced Architectures

    NASA Astrophysics Data System (ADS)

    Payne, Joshua; Taitano, William; Knoll, Dana; Liebs, Chris; Murthy, Karthik; Feltman, Nicolas; Wang, Yijie; McCarthy, Colleen; Cieren, Emanuel

    2012-10-01

    In order to solve problems such as ion coalescence and slow MHD shocks fully kinetically, we developed a fully implicit 2D energy- and charge-conserving electromagnetic PIC code, PlasmaApp2D. PlasmaApp2D differs from previous implicit PIC implementations in that it will utilize advanced architectures such as GPUs and shared-memory CPU systems, with problems too large to fit into cache. PlasmaApp2D will be a hybrid CPU-GPU code developed primarily to run on the DARWIN cluster at LANL, utilizing four 12-core AMD Opteron CPUs and two NVIDIA Tesla GPUs per node. MPI will be used for cross-node communication, OpenMP will be used for on-node parallelism, and CUDA will be used for the GPUs. Development progress and initial results will be presented.

  9. Interconnect Performance Evaluation of SGI Altix 3700 BX2, Cray X1, Cray Opteron Cluster, and Dell PowerEdge

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Saini, Subhash; Ciotti, Robert

    2006-01-01

    We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify certain limiting factors and bottlenecks with the interconnects of these systems, as well as to compare the interconnects. We measured network bandwidth using different numbers of communicating processors and communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of the network bandwidth and topology on the overall performance of each interconnect.
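
    A typical point-to-point probe behind such measurements is a ping-pong loop. The mpi4py sketch below reports achieved bandwidth between two ranks; the message sizes and repetition counts are our arbitrary choices, not the paper's benchmark suite (run with, e.g., `mpiexec -n 2 python pingpong.py`).

      # Hedged sketch of a point-to-point bandwidth (ping-pong) benchmark.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      reps = 50

      for nbytes in (1 << 10, 1 << 16, 1 << 22):
          buf = np.zeros(nbytes, dtype=np.uint8)
          comm.Barrier()
          t0 = MPI.Wtime()
          for _ in range(reps):
              if rank == 0:
                  comm.Send(buf, dest=1); comm.Recv(buf, source=1)
              elif rank == 1:
                  comm.Recv(buf, source=0); comm.Send(buf, dest=0)
          dt = MPI.Wtime() - t0
          if rank == 0:
              # Each rep moves the message both ways: 2 * nbytes per round trip.
              print(f"{nbytes:>8} B: {2 * reps * nbytes / dt / 1e6:.1f} MB/s")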

  10. SciServer: An Online Collaborative Environment for Big Data in Research and Education

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr

    2017-01-01

    For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences. SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. SciServer Compute's virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are leveraging these features into a new system called "SciServer Courseware," which will allow instructors to share assignments with their students, allowing students to engage with big data in new ways. SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey. A part of that growth has been the addition of the SkyQuery component, which allows for simple, fast cross-matching between very large astronomical datasets. Demos, documentation, and more information about all these resources can be found at www.sciserver.org.

  11. An Innovative Infrastructure with a Universal Geo-spatiotemporal Data Representation Supporting Cost-effective Integration of Diverse Earth Science Data

    NASA Astrophysics Data System (ADS)

    Kuo, K. S.; Rilee, M. L.

    2017-12-01

    Existing pathways for bringing together massive, diverse Earth Science datasets for integrated analyses burden end users with data packaging and management details irrelevant to their domain goals. The major data repositories focus on archival, discovery, and dissemination of products (files) in a standardized manner. End-users must download and then adapt these files using local resources and custom methods before analysis can proceed. This reduces scientific or other domain productivity, as scarce resources and expertise must be diverted to data processing. The Spatio-Temporal Adaptive Resolution Encoding (STARE) is a unifying scheme encoding geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions, e.g. conditional subsetting, into integer operations, that takes into account representative spatiotemporal resolutions of the data in the datasets, which is needed for data placement alignment of geo-spatiotemporally diverse data on massive parallel resources. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain specific questions instead of expending their expertise on data processing. While STARE is not tied to any particular computing technology, we have used STARE for visualization and the SciDB array database to analyze Earth Science data on a 28-node compute cluster. STARE's automatic data placement and coupling of geometric and array indexing allows complicated data comparisons to be realized as straightforward database operations like "join." With STARE-enabled automation, SciDB+STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of integrable data, and easing result sharing. Using SciDB+STARE as part of an integrated analysis infrastructure, we demonstrate the dramatic ease of combining diametrically different datasets, i.e. gridded (NMQ radar) vs. spacecraft swath (TRMM). SciDB+STARE is an important step towards a computational infrastructure for integrating and sharing diverse, complex Earth Science data and science products derived from them.
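
    The indexing idea can be illustrated with a toy encoding. The sketch below is not the STARE library (STARE additionally encodes representative resolution and is built on a hierarchical mesh rather than a rectangular grid); it only shows how interleaving the bits of discretized coordinates yields keys whose truncation coarsens resolution, turning spatial subsetting into integer prefix comparisons.

      # Illustrative bit-interleaved (Morton-style) geo-key, not STARE itself.
      def interleave2(x, y, bits=16):
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (2 * i)
              key |= ((y >> i) & 1) << (2 * i + 1)
          return key

      def encode(lon, lat, bits=16):
          # Discretize lon/lat onto a 2^bits x 2^bits grid, then interleave.
          x = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
          y = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
          return interleave2(x, y, bits)

      a = encode(-77.03, 38.90)   # two nearby points share a long key prefix,
      b = encode(-77.02, 38.91)   # so they land in the same coarse cell...
      c = encode(139.69, 35.68)   # ...while a distant point does not.
      coarse = lambda k: k >> 20  # dropping low bits coarsens the resolution
      print(coarse(a) == coarse(b), coarse(a) == coarse(c))  # True False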

  12. Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshii, K.; Iskra, K.; Naik, H.

    2011-05-01

    We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.

  13. Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming

    2017-02-01

    The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead for parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate the communication redundancy. Then, we utilize shared memory to reduce the memory copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
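
    The communication-aggregation step can be sketched as follows: buffer boundary events per destination rank, deduplicate, and flush each rank's events as one combined message. The class below is our illustration with hypothetical names; the paper's algorithm additionally optimizes intra-node copies via shared memory and scheduling via neighborhood collectives.

      # Sketch of communication aggregation: many small sends become one
      # combined, deduplicated message per destination rank.
      from collections import defaultdict

      class AggregatingSender:
          def __init__(self, send_fn):
              self.pending = defaultdict(list)
              self.send_fn = send_fn  # e.g. an MPI send in the real code

          def post(self, dest_rank, event):
              self.pending[dest_rank].append(event)

          def flush(self):
              for dest, events in self.pending.items():
                  # Deduplicate, then ship all events for this rank at once.
                  self.send_fn(dest, sorted(set(events)))
              self.pending.clear()

      log = []
      tx = AggregatingSender(lambda dest, evs: log.append((dest, evs)))
      for ev in [('site', 17), ('site', 17), ('site', 42)]:
          tx.post(1, ev)
      tx.post(2, ('site', 5))
      tx.flush()
      print(log)  # [(1, [('site', 17), ('site', 42)]), (2, [('site', 5)])]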

  14. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.

  15. Wearable Intrinsically Soft, Stretchable, Flexible Devices for Memories and Computing.

    PubMed

    Rajan, Krishna; Garofalo, Erik; Chiolerio, Alessandro

    2018-01-27

    A recent trend in the development of mass-consumption electronic devices is towards electronic textiles (e-textiles), smart wearable devices, smart clothes, and flexible or printable electronics. Intrinsically soft, stretchable, flexible Wearable Memories and Computing devices (WMCs) bring us closer to sci-fi scenarios, where future electronic systems are totally integrated in our everyday outfits and help us achieve a higher comfort level, interacting for us with other digital devices such as smartphones and domotics, or with analog devices, such as our brain/peripheral nervous system. WMC will enable each of us to contribute to open and big data systems as individual nodes, providing real-time information about physical and environmental parameters (including air pollution monitoring, sound and light pollution, chemical or radioactive fallout alerts, network availability, and so on). Furthermore, WMC could be directly connected to the human brain and enable extremely fast operation and unprecedented interface complexity, directly mapping the continuous states available to biological systems. This review focuses on recent advances in nanotechnology and materials science and pays particular attention to any result and promising technology that enables intrinsically soft, stretchable, flexible WMC.

  16. Wearable Intrinsically Soft, Stretchable, Flexible Devices for Memories and Computing

    PubMed Central

    Rajan, Krishna; Garofalo, Erik; Chiolerio, Alessandro

    2018-01-01

    A recent trend in the development of mass-consumption electronic devices is towards electronic textiles (e-textiles), smart wearable devices, smart clothes, and flexible or printable electronics. Intrinsically soft, stretchable, flexible Wearable Memories and Computing devices (WMCs) bring us closer to sci-fi scenarios, where future electronic systems are totally integrated in our everyday outfits and help us achieve a higher comfort level, interacting for us with other digital devices such as smartphones and domotics, or with analog devices, such as our brain/peripheral nervous system. WMC will enable each of us to contribute to open and big data systems as individual nodes, providing real-time information about physical and environmental parameters (including air pollution monitoring, sound and light pollution, chemical or radioactive fallout alerts, network availability, and so on). Furthermore, WMC could be directly connected to the human brain and enable extremely fast operation and unprecedented interface complexity, directly mapping the continuous states available to biological systems. This review focuses on recent advances in nanotechnology and materials science and pays particular attention to any result and promising technology that enables intrinsically soft, stretchable, flexible WMC. PMID:29382050

  17. Asynchronous Communication Scheme For Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Madan, Herb S.

    1988-01-01

    Scheme devised for asynchronous-message communication system for Mark III hypercube concurrent-processor network. Network consists of up to 1,024 processing elements connected electrically as though they were at the corners of a 10-dimensional cube. Each node contains two Motorola 68020 processors along with a Motorola 68881 floating-point processor, utilizing up to 4 megabytes of shared dynamic random-access memory. Scheme intended to support applications requiring passage of both polled or solicited and unsolicited messages.

  18. Understanding the I/O Performance Gap Between Cori KNL and Haswell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jialin; Koziol, Quincey; Tang, Houjun

    2017-05-01

    The Cori system at NERSC has two compute partitions with different CPU architectures: a 2,004-node Haswell partition and a 9,688-node KNL partition, which ranked as the 5th fastest supercomputer on the November 2016 Top 500 list. The compute partitions share a common storage configuration, and understanding the IO performance gap between them is important, impacting not only NERSC/LBNL users and other national labs, but also the relevant hardware vendors and software developers. In this paper, we have analyzed the performance of single-core and single-node IO comprehensively on the Haswell and KNL partitions, and have discovered the major bottlenecks, which include CPU frequencies and memory copy performance. We have also extended our performance tests to multi-node IO and revealed the IO cost difference caused by network latency, buffer size, and communication cost. Overall, we have developed a strong understanding of the IO gap between Haswell and KNL nodes, and the lessons learned from this exploration will guide us in designing optimal IO solutions in the many-core era.

  19. An adaptable XML based approach for scientific data management and integration

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

    2008-03-01

    Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important, and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform for supporting scientific data management and integration, based on a central-server distributed architecture, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make it convenient for quick browsing. To provide generalization, schemas and hierarchies are customizable with XML-based definitions, making it possible to quickly adapt the system to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access, and customization with XML, SciPort provides a flexible and powerful platform for sharing scientific data among scientific research communities, and has been successfully used in both biomedical research and clinical trials.

  20. An Adaptable XML Based Approach for Scientific Data Management and Integration.

    PubMed

    Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

    2008-02-20

    Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important, and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform for supporting scientific data management and integration, based on a central-server distributed architecture, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make it convenient for quick browsing. To provide generalization, schemas and hierarchies are customizable with XML-based definitions, making it possible to quickly adapt the system to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access, and customization with XML, SciPort provides a flexible and powerful platform for sharing scientific data among scientific research communities, and has been successfully used in both biomedical research and clinical trials.

  1. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
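
    The patent text does not name a particular compressor, so the following is only a hedged sketch of the client-side flow: compress a chunk before handing it to the storage node, and decompress on read. zlib's compress()/uncompress() are used here purely as a convenient stand-in.

      /* Illustrative chunk compression round trip (not the patented code).
       * Build: cc chunk.c -lz */
      #include <stdio.h>
      #include <string.h>
      #include <zlib.h>

      int main(void)
      {
          unsigned char chunk[4096];
          unsigned char packed[8192];          /* >= compressBound(4096) */
          unsigned char out[4096];
          uLongf clen = sizeof packed, dlen = sizeof out;

          memset(chunk, 'A', sizeof chunk);    /* a highly compressible chunk */

          /* Compute-node side: compress before the chunk leaves the node. */
          if (compress(packed, &clen, chunk, sizeof chunk) != Z_OK) return 1;
          printf("chunk: %zu bytes -> %lu bytes on the wire\n",
                 sizeof chunk, (unsigned long)clen);

          /* Reader side: decompress when the chunk is read back. */
          if (uncompress(out, &dlen, packed, clen) != Z_OK) return 1;
          return memcmp(chunk, out, sizeof chunk) != 0;
      }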

  2. MELOC - Memory and Location Optimized Caching for Mobile Ad Hoc Networks

    DTIC Science & Technology

    2011-01-01

    required for such environments. Moreover, nodes located at the centre have to be chosen as the cache location, since this reduces the chance of being attacked... Example 3: Sharing of music and videos is popular among mobile users. Instead of downloading... The two-tier caching scheme discussed in this paper is acoustic. The characteristics of two-tier caching are as follows: the content of data to be

  3. Community Detection on the GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naim, Md; Manne, Fredrik; Halappanavar, Mahantesh

    We present and evaluate a new GPU algorithm based on the Louvain method for community detection. Our algorithm is the first for this problem that parallelizes the access to individual edges. In this way we can fine-tune the load balance when processing networks with nodes of highly varying degrees. This is achieved by scaling the number of threads assigned to each node according to its degree. Extensive experiments show that we obtain speedups of up to a factor of 270 compared to the sequential algorithm. The algorithm consistently outperforms other recent shared memory implementations and is only one order of magnitude slower than the current fastest parallel Louvain method, running on a Blue Gene/Q supercomputer using more than 500K threads.

  4. MHD Code Optimizations and Jets in Dense Gaseous Halos

    NASA Astrophysics Data System (ADS)

    Gaibler, Volker; Vigelius, Matthias; Krause, Martin; Camenzind, Max

    We have further optimized and extended the 3D MHD code NIRVANA. The magnetized part runs in parallel, reaching 19 Gflops per SX-6 node, and has a passively advected particle population. In addition, the code is now MPI-parallel, on top of the shared memory parallelization. On a 512^3 grid, we reach 561 Gflops with 32 nodes on the SX-8. Also, we have successfully used FLASH on the Opteron cluster. Scientific results are preliminary so far. We report one computation of highly resolved cocoon turbulence. While we find some similarities to earlier 2D work by us and others, we note a strange reluctance of cold material to enter the low-density cocoon, which has to be investigated further.

  5. An accurate, compact and computationally efficient representation of orbitals for quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Luo, Ye; Esler, Kenneth; Kent, Paul; Shulenburger, Luke

    Quantum Monte Carlo (QMC) calculations of giant molecules and of surface and defect properties of solids have recently become feasible due to drastically expanding computational resources. However, with the most computationally efficient basis set, B-splines, these calculations are severely restricted by the memory capacity of compute nodes. The B-spline coefficients are shared on a node but not distributed among nodes, to ensure fast evaluation. A hybrid representation, which incorporates atomic orbitals near the ions and B-splines in the interstitial regions, offers a more accurate and less memory-demanding description of the orbitals, because they are naturally more atomic-like near ions and much smoother in between, thus allowing coarser B-spline grids. We will demonstrate the advantage of the hybrid representation over pure B-spline and Gaussian basis sets, and also show significant speed-ups, for example in computing the non-local pseudopotentials with our new scheme. Moreover, we discuss a new algorithm for atomic orbital initialization, which previously required an extra workflow step taking a few days. With this work, the highly efficient hybrid representation paves the way to simulating large, even inhomogeneous, systems using QMC. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Computational Materials Sciences Program.

  6. Network resiliency through memory health monitoring and proactive management

    DOEpatents

    Andrade Costa, Carlos H.; Cher, Chen-Yong; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-11-21

    A method for managing a network queue memory includes receiving sensor information about the network queue memory, predicting a memory failure in the network queue memory based on the sensor information, and outputting a notification through a plurality of nodes forming a network and using the network queue memory, the notification configuring communications between the nodes.

  7. Bad data packet capture device

    DOEpatents

    Chen, Dong; Gara, Alan; Heidelberger, Philip; Vranas, Pavlos

    2010-04-20

    An apparatus and method for capturing data packets for analysis on a network computing system includes a sending node and a receiving node connected by a bi-directional communication link. The sending node sends a data transmission to the receiving node on the bi-directional communication link, and the receiving node receives the data transmission and verifies the data transmission to determine valid data and invalid data and verify retransmissions of invalid data as corresponding valid data. A memory device communicates with the receiving node for storing the invalid data and the corresponding valid data. A computing node communicates with the memory device and receives and performs an analysis of the invalid data and the corresponding valid data received from the memory device.
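
    The patent describes hardware, so the following is only a software analogy of the capture idea: verify each packet, keep the invalid copy, and pair it with the verified retransmission in a log so both can be compared later. The XOR checksum and all names are illustrative, not from the patent.

      /* Toy model of capture-for-analysis: log the bad packet and its
       * valid retransmission, then show which bits the link flipped. */
      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>

      #define PKT 8

      static uint8_t xsum(const uint8_t *p, size_t n)   /* toy checksum */
      {
          uint8_t s = 0;
          while (n--) s ^= *p++;
          return s;
      }

      int main(void)
      {
          uint8_t good[PKT] = {1, 2, 3, 4, 5, 6, 7, 0}, bad[PKT];
          good[7] = xsum(good, 7);          /* last byte carries checksum */
          memcpy(bad, good, PKT);
          bad[2] ^= 0x10;                   /* simulate a link error */

          /* the "memory device" of the patent, as two log buffers */
          uint8_t bad_log[PKT] = {0}, good_log[PKT] = {0};

          if (xsum(bad, 7) != bad[7]) {     /* verification fails ... */
              memcpy(bad_log, bad, PKT);    /* ... capture the invalid data */
              if (xsum(good, 7) == good[7]) /* retransmission verifies */
                  memcpy(good_log, good, PKT);  /* capture valid twin */
          }
          printf("flipped bits: ");
          for (int i = 0; i < PKT; i++)
              printf("%02x", bad_log[i] ^ good_log[i]);
          putchar('\n');
          return 0;
      }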

  8. SKIRT: Hybrid parallelization of radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.

    2017-07-01

    We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.
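
    SKIRT itself is a much larger C++ code; the toy C program below only illustrates the two layers the abstract combines, MPI between processes and OpenMP threads within each process, with an atomic update standing in for the lock-free accumulation the authors describe. The "photon weight" is a made-up placeholder.

      /* Minimal hybrid-parallel sketch in the spirit of the scheme above.
       * Build: mpicc -fopenmp hybrid.c */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          int rank, nproc;
          double local = 0.0, global = 0.0;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nproc);

          /* Threads share one tally per process: no duplicated structures. */
          #pragma omp parallel for
          for (int i = rank; i < 1000000; i += nproc) {
              double w = 1.0 / (i + 1.0);   /* stand-in for a photon weight */
              #pragma omp atomic            /* lock-free accumulation */
              local += w;
          }

          /* Distributed-memory layer: combine the per-process tallies. */
          MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                     MPI_COMM_WORLD);
          if (rank == 0) printf("total weight = %f\n", global);
          MPI_Finalize();
          return 0;
      }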

  9. Multi-processor including data flow accelerator module

    DOEpatents

    Davidson, George S.; Pierce, Paul E.

    1990-01-01

    An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
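
    As a rough software model of the memory cell the patent describes (operand slots, tag bits marking operand arrival, a primitive index, and linking slots for distributing the result), one might write something like the following; field names, sizes, and the addition primitive are invented for illustration.

      /* Sketch of a tagged dataflow memory cell that "fires" once all
       * operands have arrived. Not the patented hardware design. */
      #include <stdio.h>

      #define NOPND 2   /* operand slots per cell */
      #define NLINK 2   /* cells that consume this cell's result */

      struct cell {
          double   opnd[NOPND];   /* operands for the primitive computation */
          unsigned tags;          /* bit i set => operand i has arrived */
          int      prim;          /* index into a primitive lookup table */
          int      link[NLINK];   /* destination cells for the result */
      };

      /* Deliver an operand; return 1 when the cell becomes ready to fire. */
      static int deliver(struct cell *c, int slot, double v)
      {
          c->opnd[slot] = v;
          c->tags |= 1u << slot;
          return c->tags == (1u << NOPND) - 1;   /* all operands available */
      }

      int main(void)
      {
          struct cell add = { .prim = 0, .link = {-1, -1} };
          deliver(&add, 0, 2.0);
          if (deliver(&add, 1, 3.0))     /* second arrival schedules it */
              printf("fire prim %d: %g\n", add.prim,
                     add.opnd[0] + add.opnd[1]);
          return 0;
      }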

  10. Social Networking Adapted for Distributed Scientific Collaboration

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    Sci-Share is a social networking site with novel, specially designed feature sets to enable simultaneous remote collaboration and sharing of large data sets among scientists. The site will include not only the standard features found on popular consumer-oriented social networking sites such as Facebook and Myspace, but also a number of powerful tools to extend its functionality to a science collaboration site. A Virtual Observatory is a promising technology for making data accessible from various missions and instruments through a Web browser. Sci-Share augments services provided by Virtual Observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase science returns from NASA missions. Sci-Share also enables better utilization of NASA's high-performance computing resources by providing an easy and central mechanism to access and share large files in users' space or those saved on mass storage. The most common means of remote scientific collaboration today remains the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. Each of these tools has well-known limitations. Sci-Share transforms the social networking paradigm into a scientific collaboration environment by offering powerful tools for cooperative discourse and digital content sharing. Sci-Share differentiates itself by serving as an online repository for users' digital content with the following unique features: a) Sharing of any file type, any size, from anywhere; b) Creation of projects and groups for controlled sharing; c) Module for sharing files on HPC (High Performance Computing) sites; d) Universal accessibility of staged files as embedded links on other sites (e.g., Facebook) and tools (e.g., e-mail); e) Drag-and-drop transfer of large files, replacing awkward e-mail attachments (and file size limitations); f) Enterprise-level data and messaging encryption; and g) Easy-to-use, intuitive workflow.

  11. Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    (...) and longer-term (/projects) storage. These file systems (/home, /nopt, /projects) are mounted on all nodes. Compute nodes have Intel Xeon E5-2670 "Sandy Bridge" processors and 64 GB of memory. [The remainder of this web-page extract was a flattened hardware table listing # cores/node, memory/node, and peak double-precision (DP) performance per node; the full table is not recoverable.]

  12. Implementation of bipartite or remote unitary gates with repeater nodes

    NASA Astrophysics Data System (ADS)

    Yu, Li; Nemoto, Kae

    2016-08-01

    We propose some protocols to implement various classes of bipartite unitary operations on two remote parties with the help of repeater nodes in between. We also present a protocol to implement a single-qubit unitary, with parameters determined by a remote party, with the help of up to three repeater nodes. It is assumed that the neighboring nodes are connected by noisy photonic channels, that the local gates can be performed quite accurately, and that the decoherence of memories is significant. A unitary is often part of a larger computation or communication task in a quantum network, and to reduce the amount of decoherence in other systems of the network, we focus on the goal of saving the total time for implementing a unitary, including the time for entanglement preparation. We review some previously studied protocols that implement bipartite unitaries using local operations, classical communication, and prior shared entanglement, and apply them to the situation with repeater nodes without prior entanglement. We find that the protocols using piecewise entanglement between neighboring nodes often require less total time compared to preparing entanglement between the two end nodes first and then performing the previously known protocols. For a generic bipartite unitary, as the number of repeater nodes increases, the total time could approach the time cost for direct signal transfer from one end node to the other. We also prove some lower bounds on the total time when there are a small number of repeater nodes. The application to position-based cryptography is discussed.

  13. An OpenACC-Based Unified Programming Model for Multi-accelerator Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jungwon; Lee, Seyong; Vetter, Jeffrey S

    2015-01-01

    This paper proposes a novel SPMD programming model for OpenACC. Our model integrates the different granularities of parallelism, from vector-level parallelism to node-level parallelism, into a single, unified model based on OpenACC. It allows programmers to write programs for multiple accelerators using a uniform programming model, whether the accelerators are in shared or distributed memory systems. We implement a prototype of our model and evaluate its performance on a GPU-based supercomputer using three benchmark applications.
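
    The paper's unified model is its own contribution; the sketch below shows only the baseline it improves on, plain multi-accelerator OpenACC with a loop split manually across devices. The device-selection routines (acc_get_device_type, acc_get_num_devices, acc_set_device_num) are standard OpenACC API calls, but device numbering conventions vary by implementation.

      /* Baseline multi-accelerator OpenACC in C (not the paper's model).
       * Build with an OpenACC compiler, e.g. nvc -acc multiacc.c */
      #include <openacc.h>
      #include <stdio.h>

      #define N 1000000

      static float x[N];

      int main(void)
      {
          acc_device_t dt = acc_get_device_type();  /* current device type */
          int ndev = acc_get_num_devices(dt);
          if (ndev < 1) ndev = 1;                   /* fall back gracefully */

          for (int d = 0; d < ndev; d++) {
              int lo = d * (N / ndev);
              int hi = (d == ndev - 1) ? N : lo + N / ndev;
              acc_set_device_num(d, dt);            /* select device d */
              #pragma acc parallel loop copyout(x[lo:hi-lo])
              for (int i = lo; i < hi; i++)
                  x[i] = 2.0f * i;                  /* device d's share */
          }
          printf("x[N-1] = %f across %d device(s)\n", x[N - 1], ndev);
          return 0;
      }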

  14. CitSci.org: A New Model for Managing, Documenting, and Sharing Citizen Science Data.

    PubMed

    Wang, Yiwei; Kaplan, Nicole; Newman, Greg; Scarpino, Russell

    2015-10-01

    Citizen science projects have the potential to advance science by increasing the volume and variety of data, as well as innovation. Yet this potential has not been fully realized, in part because citizen science data are typically not widely shared and reused. To address this and related challenges, we built CitSci.org (see www.citsci.org), a customizable platform that allows users to collect and generate diverse datasets. We hope that CitSci.org will ultimately increase discoverability and confidence in citizen science observations, encouraging scientists to use such data in their own scientific research.

  15. Scaling Irregular Applications through Data Aggregation and Software Multithreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morari, Alessandro; Tumeo, Antonino; Chavarría-Miranda, Daniel

    Bioinformatics, data analytics, semantic databases, and knowledge discovery are emerging high performance application areas that exploit dynamic, linked data structures such as graphs, unbalanced trees, or unstructured grids. These data structures are usually very large, requiring significantly more memory than is available on single shared memory systems. Additionally, these data structures are difficult to partition on distributed memory systems. They also present poor spatial and temporal locality, thus generating unpredictable memory and network accesses. The Partitioned Global Address Space (PGAS) programming model seems suitable for these applications, because it allows using a shared memory abstraction across distributed-memory clusters. However, current PGAS languages and libraries are built to target regular remote data accesses and block transfers. Furthermore, they usually rely on the Single Program Multiple Data (SPMD) parallel control model, which is not well suited to the fine-grained, dynamic, and unbalanced parallelism of irregular applications. In this paper we present GMT (Global Memory and Threading library), a custom runtime library that enables efficient execution of irregular applications on commodity clusters. GMT integrates a PGAS data substrate with simple fork/join parallelism and provides automatic load balancing on a per-node basis. It implements multi-level aggregation and lightweight multithreading to maximize memory and network bandwidth with fine-grained data accesses and to tolerate long data access latencies. A key innovation in the GMT runtime is its thread specialization (workers, helpers, and communication threads), which realizes the overall functionality. We compare our approach with other PGAS models, such as UPC running over GASNet, and with hand-optimized MPI code on a set of typical large-scale irregular applications, demonstrating speedups of an order of magnitude.

  16. Implementing Connected Component Labeling as a User Defined Operator for SciDB

    NASA Technical Reports Server (NTRS)

    Oloso, Amidu; Kuo, Kwo-Sen; Clune, Thomas; Brown, Paul; Poliakov, Alex; Yu, Hongfeng

    2016-01-01

    We have implemented a flexible User Defined Operator (UDO) for labeling connected components of a binary mask expressed as an array in SciDB, a parallel distributed database management system based on the array data model. This UDO is able to process very large multidimensional arrays by exploiting SciDB's memory management mechanism, which efficiently manipulates arrays whose memory requirements far exceed available physical memory. The UDO takes as primary inputs a binary mask array and a binary stencil array that specifies the connectivity of a given cell to its neighbors. The UDO returns an array of the same shape as the input mask array with each foreground cell containing the label of the component it belongs to. By default, dimensions are treated as non-periodic, but the UDO also accepts optional input parameters to specify periodicity in any of the array dimensions. The UDO requires four stages to completely label connected components. In the first stage, labels are computed for each subarray or chunk of the mask array in parallel across SciDB instances using the weighted quick union (WQU) with half-path compression algorithm. In the second stage, labels around chunk boundaries from the first stage are stored in a temporary SciDB array that is then replicated across all SciDB instances. Equivalences are resolved by again applying the WQU algorithm to these boundary labels. In the third stage, relabeling is done for each chunk using the resolved equivalences. In the fourth stage, the resolved labels, which so far are "flattened" coordinates of the original binary mask array, are renamed with sequential integers for legibility. The UDO is demonstrated on a 3-D mask of O(10^11) elements, with O(10^8) foreground cells and O(10^6) connected components. The operator completes in 19 minutes using 84 SciDB instances.
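
    The first labeling stage rests on weighted quick union (WQU) with half-path compression; stripped of the SciDB chunking, that union-find core looks roughly like this (array size and the test unions are illustrative).

      /* Core union-find used per chunk: WQU with half-path compression. */
      #include <stdio.h>

      #define N 8
      static int parent[N], weight[N];

      static int find(int i)
      {
          while (parent[i] != i) {
              parent[i] = parent[parent[i]];   /* half-path compression */
              i = parent[i];
          }
          return i;
      }

      static void unite(int a, int b)
      {
          int ra = find(a), rb = find(b);
          if (ra == rb) return;
          if (weight[ra] < weight[rb]) { int t = ra; ra = rb; rb = t; }
          parent[rb] = ra;                 /* smaller tree under larger */
          weight[ra] += weight[rb];
      }

      int main(void)
      {
          for (int i = 0; i < N; i++) { parent[i] = i; weight[i] = 1; }
          unite(0, 1); unite(1, 2); unite(5, 6);  /* adjacency from the mask */
          printf("0 and 2 connected: %s\n", find(0) == find(2) ? "yes" : "no");
          printf("0 and 5 connected: %s\n", find(0) == find(5) ? "yes" : "no");
          return 0;
      }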

  17. Performances of multiprocessor multidisk architectures for continuous media storage

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases, capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes, through bottleneck performance evaluation and simulation, the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located close to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.

  18. Diagnosable structured logic array

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling (Inventor); Miles, Lowell (Inventor); Gambles, Jody (Inventor); Maki, Gary K. (Inventor)

    2009-01-01

    A diagnosable structured logic array and associated process is provided. A base cell structure is provided comprising: a logic unit comprising a plurality of input nodes, a plurality of selection nodes, and an output node; a plurality of switches coupled to the selection nodes, where each switch comprises a plurality of input lines, a selection line, and an output line; a memory cell coupled to the output node; and a test address bus and a program control bus coupled to the plurality of input lines and the selection line of the plurality of switches. A state on each of the plurality of input nodes is verifiably loaded and read from the memory cell. A trusted memory block is provided. The associated process is provided for testing and verifying a plurality of truth table inputs of the logic unit.

  19. CLOMP v1.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyllenhaal, J.

    CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading. For simplicity, it does not use MPI by default, but it is expected to be run on the resources a threaded MPI task would use (e.g., a portion of a shared memory compute node). Compiling with -DWITH_MPI allows packing one or more nodes with CLOMP tasks and having CLOMP report OpenMP performance for the slowest MPI task. On current systems, the strong-scaling performance results for 4, 8, or 16 threads are of the most interest. Suggested weak-scaling inputs are provided for evaluating future systems. Since MPI is often used to place at least one MPI task per coherence or NUMA domain, it is recommended to focus OpenMP runtime measurements on a subset of node hardware where it is most possible to have low OpenMP overheads (e.g., within one coherence domain or NUMA domain).
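
    CLOMP's actual kernels are more elaborate; the fragment below is only a minimal illustration of the kind of measurement such a benchmark makes, timing empty OpenMP parallel regions against a trivial serial loop to expose per-region threading overhead.

      /* Toy OpenMP-overhead probe, not CLOMP itself.
       * Build: cc -fopenmp overhead.c */
      #include <omp.h>
      #include <stdio.h>

      #define REPS 10000

      int main(void)
      {
          volatile double sink = 0.0;      /* volatile defeats optimization */
          double t0, serial, threaded;

          t0 = omp_get_wtime();
          for (int r = 0; r < REPS; r++)
              sink += 1.0;                 /* serial baseline */
          serial = omp_get_wtime() - t0;

          t0 = omp_get_wtime();
          for (int r = 0; r < REPS; r++) {
              #pragma omp parallel         /* cost of spinning up the team */
              { /* empty region: time spent here is pure overhead */ }
          }
          threaded = omp_get_wtime() - t0;

          printf("threads=%d  per-region overhead ~ %.3f us\n",
                 omp_get_max_threads(), 1e6 * (threaded - serial) / REPS);
          return 0;
      }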

  20. CitSci.org: A New Model for Managing, Documenting, and Sharing Citizen Science Data

    PubMed Central

    Wang, Yiwei; Kaplan, Nicole; Newman, Greg; Scarpino, Russell

    2015-01-01

    Citizen science projects have the potential to advance science by increasing the volume and variety of data, as well as innovation. Yet this potential has not been fully realized, in part because citizen science data are typically not widely shared and reused. To address this and related challenges, we built CitSci.org (see www.citsci.org), a customizable platform that allows users to collect and generate diverse datasets. We hope that CitSci.org will ultimately increase discoverability and confidence in citizen science observations, encouraging scientists to use such data in their own scientific research. PMID:26492521

  1. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented for a state-of-the-art multicore CPU-based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demand for compute time, memory, storage, and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculation and migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and its feeding mechanism to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for a multicore CPU-based parallel system had been developed. Recently, we have worked on improving the parallel performance of this application by re-designing the parallelization approach. The new algorithm is capable of efficiently migrating both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements for storage, I/O, and inter-node communication. The resultant application is tested using 3D Overthrust data on PARAM Yuva II, which is a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance of the algorithm is studied using different numerical experiments, and the scalability results show striking improvement over its previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data and 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
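
    As a quick consistency check on the reported figures (not a claim from the paper itself), parallel efficiency is speedup divided by the number of nodes:

      E = S / p
      E_prestack  = 49.05 / 64 ≈ 0.766   (the reported 76.64%)
      E_poststack = 32.00 / 64 = 0.500   (the reported 50.00%)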

  2. Secure Peer-to-Peer Networks for Scientific Information Sharing

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    The most common means of remote scientific collaboration today includes the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. With the growth of broadband Internet, there has been a desire to share large files (movies, scientific data files, and the like) over the Internet. E-mail has limits on the size of files that can be attached and transmitted. FTP is often used to share large files, but this requires the user to set up an FTP site, for which it is hard to set group privileges; it is not straightforward for everyone, and the content is not searchable. Peer-to-peer technology (P2P), which has been overwhelmingly successful in popular content distribution, is the basis for development of a scientific collaboratory called Scientific Peer Network (SciPerNet). This technology combines social networking with P2P file sharing. SciPerNet will be a standalone application, written in Java and Swing, thus ensuring portability to a number of different platforms. Some of the features include user authentication, search capability, seamless integration with a data center, the ability to create groups and social networks, and on-line chat. In contrast to P2P networks such as Gnutella, BitTorrent, and others, SciPerNet incorporates three design elements that are critical to application of P2P for scientific purposes: user authentication, data integrity validation, and reliable searching. SciPerNet also provides a complementary solution to virtual observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase scientific returns from NASA missions. As such, SciPerNet can serve a two-fold purpose for NASA: a cost-saving software as well as a productivity tool for scientists working with data from NASA missions.

  3. A New Random Walk for Replica Detection in WSNs.

    PubMed

    Aalsalem, Mohammed Y; Khan, Wazir Zada; Saad, N M; Hossain, Md Shohrab; Atiquzzaman, Mohammed; Khan, Muhammad Khurram

    2016-01-01

    Wireless Sensor Networks (WSNs) are vulnerable to Node Replication attacks or Clone attacks. Among all the existing clone detection protocols in WSNs, RAWL shows the most promising results by employing Simple Random Walk (SRW). More recently, RAND outperforms RAWL by incorporating Network Division with SRW. Both RAND and RAWL have used SRW for random selection of witness nodes, which is problematic because the walk frequently revisits previously passed nodes; this leads to longer delays and high energy expenditure, with a lower probability that witness nodes intersect. To circumvent this problem, we propose to employ a new kind of constrained random walk, namely Single Stage Memory Random Walk, and present a distributed technique called SSRWND (Single Stage Memory Random Walk with Network Division). In SSRWND, single stage memory random walk is combined with network division, aiming to decrease the communication and memory costs while keeping the detection probability higher. Through intensive simulations it is verified that SSRWND guarantees higher witness node security with moderate communication and memory overheads. SSRWND is expedient for security-oriented application fields of WSNs, such as military and medical applications.
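
    The defining constraint of a single stage memory random walk is simply that the next hop may not be the node the walk just left. A minimal sketch on an 8-node cube graph (a stand-in topology, not a WSN) makes the rule concrete.

      /* Random walk with one step of memory: never step straight back. */
      #include <stdio.h>
      #include <stdlib.h>

      #define STEPS 16

      int main(void)
      {
          srand(1);
          int prev = -1, cur = 0;          /* start at node 0, no history */

          for (int s = 0; s < STEPS; s++) {
              int next;
              do
                  next = cur ^ (1 << (rand() % 3)); /* random cube neighbor */
              while (next == prev);        /* single stage memory: reject
                                              the node we just came from */
              prev = cur;
              cur = next;
              printf("%d ", cur);
          }
          putchar('\n');
          return 0;
      }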

  4. A New Random Walk for Replica Detection in WSNs

    PubMed Central

    Aalsalem, Mohammed Y.; Saad, N. M.; Hossain, Md. Shohrab; Atiquzzaman, Mohammed; Khan, Muhammad Khurram

    2016-01-01

    Wireless Sensor Networks (WSNs) are vulnerable to Node Replication attacks or Clone attacks. Among all the existing clone detection protocols in WSNs, RAWL shows the most promising results by employing Simple Random Walk (SRW). More recently, RAND outperforms RAWL by incorporating Network Division with SRW. Both RAND and RAWL have used SRW for random selection of witness nodes, which is problematic because the walk frequently revisits previously passed nodes; this leads to longer delays and high energy expenditure, with a lower probability that witness nodes intersect. To circumvent this problem, we propose to employ a new kind of constrained random walk, namely Single Stage Memory Random Walk, and present a distributed technique called SSRWND (Single Stage Memory Random Walk with Network Division). In SSRWND, single stage memory random walk is combined with network division, aiming to decrease the communication and memory costs while keeping the detection probability higher. Through intensive simulations it is verified that SSRWND guarantees higher witness node security with moderate communication and memory overheads. SSRWND is expedient for security-oriented application fields of WSNs, such as military and medical applications. PMID:27409082

  5. Sharing programming resources between Bio* projects through remote procedure call and native call stack strategies.

    PubMed

    Prins, Pjotr; Goto, Naohisa; Yates, Andrew; Gautier, Laurent; Willis, Scooter; Fields, Christopher; Katayama, Toshiaki

    2012-01-01

    Open-source software (OSS) encourages computer programmers to reuse software components written by others. In evolutionary bioinformatics, OSS comes in a broad range of programming languages, including C/C++, Perl, Python, Ruby, Java, and R. To avoid writing the same functionality multiple times for different languages, it is possible to share components by bridging computer languages and Bio* projects, such as BioPerl, Biopython, BioRuby, BioJava, and R/Bioconductor. In this chapter, we compare the two principal approaches for sharing software between different programming languages: either by remote procedure call (RPC) or by sharing a local call stack. RPC provides a language-independent protocol over a network interface; examples are RSOAP and Rserve. The local call stack provides a between-language mapping not over the network interface, but directly in computer memory; examples are R bindings, RPy, and languages sharing the Java Virtual Machine stack. This functionality provides strategies for sharing of software between Bio* projects, which can be exploited more often. Here, we present cross-language examples for sequence translation, and measure throughput of the different options. We compare calling into R through native R, RSOAP, Rserve, and RPy interfaces, with the performance of native BioPerl, Biopython, BioJava, and BioRuby implementations, and with call stack bindings to BioJava and the European Molecular Biology Open Software Suite. In general, call stack approaches outperform native Bio* implementations and these, in turn, outperform RPC-based approaches. To test and compare strategies, we provide a downloadable BioNode image with all examples, tools, and libraries included. The BioNode image can be run on VirtualBox-supported operating systems, including Windows, OSX, and Linux.

  6. Methods and apparatus using commutative error detection values for fault isolation in multiple node computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong

    Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values--for example, checksums--to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
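
    The key requirement is that the detection value be commutative, so that two runs injecting the same packets in different orders still agree. A byte-wise additive checksum, used below purely as an illustration, has that property.

      /* Why commutativity matters: addition is order-independent, so the
       * same packets in any interleaving yield the same detection value. */
      #include <stdio.h>
      #include <stdint.h>

      static uint64_t add_packet(uint64_t sum, const uint8_t *p, size_t n)
      {
          while (n--) sum += *p++;   /* addition commutes */
          return sum;
      }

      int main(void)
      {
          uint8_t a[] = {1, 2, 3}, b[] = {9, 8};
          uint64_t run1 = 0, run2 = 0;

          run1 = add_packet(run1, a, sizeof a);   /* run 1: a then b */
          run1 = add_packet(run1, b, sizeof b);

          run2 = add_packet(run2, b, sizeof b);   /* run 2: b then a */
          run2 = add_packet(run2, a, sizeof a);

          /* Equal values; a mismatch between runs would implicate the node. */
          printf("run1=%llu run2=%llu -> %s\n",
                 (unsigned long long)run1, (unsigned long long)run2,
                 run1 == run2 ? "match" : "fault suspected");
          return 0;
      }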

  7. Induction of CD8 T Cell Heterologous Protection by a Single Dose of Single-Cycle Infectious Influenza Virus

    PubMed Central

    Baker, Steven F.; Martínez-Sobrido, Luis

    2014-01-01

    ABSTRACT The effector functions of specific CD8 T cells are crucial in mediating influenza heterologous protection. However, new approaches for influenza vaccines that can trigger effective CD8 T cell responses have not been extensively explored. We report here the generation of single-cycle infectious influenza virus that lacks a functional hemagglutinin (HA) gene on an X31 genetic background and demonstrate its potential for triggering protective CD8 T cell immunity against heterologous influenza virus challenge. In vitro, X31-sciIV can infect MDCK cells, but infectious virions are not produced unless HA is transcomplemented. In vivo, intranasal immunization with X31-sciIV does not cause any clinical symptoms in mice but generates influenza-specific CD8 T cells in lymphoid (mediastinal lymph nodes and spleen) and nonlymphoid tissues, including lung and bronchoalveolar lavage fluid, as measured by H2-Db NP366 and PA224 tetramer staining. In addition, a significant proportion of X31-sciIV-induced antigen-specific respiratory CD8 T cells expressed VLA-1, a marker that is associated with heterologous influenza protection. Further, these influenza-specific CD8 T cells produce antiviral cytokines when stimulated with NP366 and PA224 peptides, indicating that CD8 T cells triggered by X31-sciIV are functional. When challenged with a lethal dose of heterologous PR8 virus, X31-sciIV-primed mice were fully protected from death. However, when CD8 T cells were depleted after priming or before priming, mice could not effectively control virus replication or survive the lethal challenge, indicating that X31-sciIV-induced memory CD8 T cells mediate the heterologous protection. Thus, our results demonstrate the potential for sciIV as a CD8 T cell-inducing vaccine. IMPORTANCE One of the challenges for influenza prevention is the existence of multiple influenza virus subtypes and variants and the fact that new strains can emerge yearly. Numerous studies have indicated that the effector functions of specific CD8 T cells are crucial in mediating influenza heterologous protection. However, influenza vaccines that can trigger effective CD8 T cell responses for heterologous protection have not been developed. We report here the generation of an X31 (H3N2) virus-derived single-cycle infectious influenza virus, X31-sciIV. A one-dose immunization with X31-sciIV is capable of inducing functional influenza virus-specific CD8 T cells that can be recruited into respiratory tissues and provide protection against lethal heterologous challenge. Without these cells, protection against lethal challenge was essentially lost. Our data indicate that an influenza vaccine that primarily relies on CD8 T cells for protection could be developed. PMID:25100831

  8. SCI peer health coach influence on self-management with peers: a qualitative analysis.

    PubMed

    Skeels, S E; Pernigotti, D; Houlihan, B V; Belliveau, T; Brody, M; Zazula, J; Hasiotis, S; Seetharama, S; Rosenblum, D; Jette, A

    2017-11-01

    A process evaluation of a clinical trial. To describe the roles fulfilled by peer health coaches (PHCs) with spinal cord injury (SCI) during a randomized controlled trial research study called 'My Care My Call', a novel telephone-based, peer-led self-management intervention for adults with chronic SCI 1+ years after injury. Connecticut and Greater Boston Area, MA, USA. Directed content analysis was used to qualitatively examine information from 504 tele-coaching calls, conducted with 42 participants with SCI, by two trained SCI PHCs. Self-management was the focus of each 6-month PHC-peer relationship. PHCs documented how and when they used the communication tools (CTs) and information delivery strategies (IDSs) they developed for the intervention. Interaction data were coded and analyzed to determine PHC roles in relation to CT and IDS utilization and application. PHCs performed three principal roles: Role Model, Supporter, and Advisor. Role Model interactions included CTs and IDSs that allowed PHCs to share personal experiences of managing and living with an SCI, including sharing their opinions and advice when appropriate. As Supporters, PHCs used CTs and IDSs to build credible relationships based on dependability and reassuring encouragement. PHCs fulfilled the unique role of Advisor using CTs and IDSs to teach and strategize with peers about SCI self-management. The SCI PHC performs a powerful, flexible role in promoting SCI self-management among peers. Analysis of PHC roles can inform the design of peer-led interventions and highlights the importance for the provision of peer mentor training.

  9. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
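
    Stripped of the DMA hardware, the claimed control flow is: stream fixed-size portions through the memory FIFO while waiting for the RTS acknowledgement, then finish with one direct put. The toy program below fakes the acknowledgement with a poll counter just to make the branch structure runnable; sizes and names are invented.

      /* Paper sketch of the RTS protocol's control flow, not DMA code. */
      #include <stdio.h>

      #define TOTAL   1024    /* bytes to move */
      #define PORTION 128     /* size of each memory-FIFO transfer */

      static int ack_received(void)   /* stand-in for polling the DMA */
      {
          static int polls = 0;
          return ++polls >= 3;        /* pretend the ack lands on poll 3 */
      }

      int main(void)
      {
          int sent = 0;

          /* RTS would be sent here; overlap transfer with waiting for ack. */
          while (sent < TOTAL && !ack_received()) {
              sent += PORTION;        /* one memory-FIFO portion */
              printf("FIFO portion -> %d/%d bytes\n", sent, TOTAL);
          }
          if (sent < TOTAL)           /* ack arrived: finish in one shot */
              printf("direct put  -> %d bytes\n", TOTAL - sent);
          return 0;
      }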

  10. Resting-state synchrony between the retrosplenial cortex and anterior medial cortical structures relates to memory complaints in subjective cognitive impairment.

    PubMed

    Yasuno, Fumihiko; Kazui, Hiroaki; Yamamoto, Akihide; Morita, Naomi; Kajimoto, Katsufumi; Ihara, Masafumi; Taguchi, Akihiko; Matsuoka, Kiwamu; Kosaka, Jun; Tanaka, Toshihisa; Kudo, Takashi; Takeda, Masatoshi; Nagatsuka, Kazuyuki; Iida, Hidehiro; Kishimoto, Toshifumi

    2015-06-01

    Subjective cognitive impairment (SCI) is a clinical state characterized by subjective cognitive deficits without cognitive impairment. To test the hypothesis that this state might involve dysfunction of self-referential processing mediated by cortical midline structures, we investigated abnormalities of functional connectivity in these structures in individuals with SCI using resting-state functional magnetic resonance imaging. We performed functional connectivity analysis for 23 individuals with SCI and 30 individuals without SCI. To reveal the pathophysiological basis of the functional connectivity change, we performed magnetic resonance-diffusion tensor imaging. Positron emission tomography-amyloid imaging was conducted in 13 SCI and 15 nonSCI subjects. Individuals with SCI showed reduced functional connectivity in cortical midline structures. Reduction in white matter connections was related to reduced functional connectivity, but we found no amyloid deposition in individuals with SCI. The results do not necessarily contradict the possibility that SCI indicates initial cognitive decrements, but imply that reduced functional connectivity in cortical midline structures contributes to overestimation of the experience of forgetfulness. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Building gene co-expression networks using transcriptomics data for systems biology investigations: Comparison of methods using microarray data

    PubMed Central

    Kadarmideen, Haja N; Watson-haigh, Nathan S

    2012-01-01

    Gene co-expression networks (GCN), built using high-throughput gene expression data, are fundamental aspects of systems biology. The main aim of this study was to compare two popular approaches to building and analysing GCN. We use real ovine microarray transcriptomics datasets representing four different treatments with Metyrapone, an inhibitor of cortisol biosynthesis. We conducted several microarray quality control checks before applying GCN methods to filtered datasets. Then we compared the outputs of the two methods using connectivity as a criterion, as it measures how well a node (gene) is connected within a network. The two GCN construction methods used were the Weighted Gene Co-expression Network Analysis (WGCNA) and Partial Correlation and Information Theory (PCIT) methods. Nodes were ranked based on their connectivity measures in each of the four different networks created by WGCNA and PCIT, and node ranks in the two methods were compared to identify those nodes which are highly differentially ranked (HDR). A total of 1,017 HDR nodes were identified across one or more of the four networks. We investigated HDR nodes by gene enrichment analyses in relation to their biological relevance to phenotypes. We observed that, in contrast to the WGCNA method, the PCIT algorithm removes many of the edges of the most highly interconnected nodes. Removal of edges of the most highly connected nodes, or hub genes, will have consequences for downstream analyses and biological interpretations. In general, for large GCN construction (with > 20,000 genes), access to large computer clusters, particularly those with larger amounts of shared memory, is recommended. PMID:23144540

  12. Managing internode data communications for an uninitialized process in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  13. Managing internode data communications for an uninitialized process in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  14. Parallel file system with metadata distributed across partitioned key-value store

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).

  15. Brain Network Changes and Memory Decline in Aging

    PubMed Central

    Beason-Held, Lori L.; Hohman, Timothy J.; Venkatraman, Vijay; An, Yang; Resnick, Susan M.

    2016-01-01

    One theory of age-related cognitive decline proposes that changes within the default mode network (DMN) of the brain impact the ability to successfully perform cognitive operations. To investigate this theory, we examined functional covariance within brain networks using regional cerebral blood flow data, measured by 15O-water PET, from 99 participants (mean baseline age 68.6 ±7.5) in the Baltimore Longitudinal Study of Aging collected over a 7.4 year period. The sample was divided in tertiles based on longitudinal performance on a verbal recognition memory task administered during scanning, and functional covariance was compared between the upper (improvers) and lower (decliners) tertile groups. The DMN and verbal memory networks (VMN) were then examined during the verbal memory scan condition. For each network, group differences in node-to-network coherence and individual node-to-node covariance relationships were assessed at baseline and in change over time. Compared with improvers, decliners showed differences in node-to-network coherence and in node-to-node relationships in the DMN but not the VMN during verbal memory. These DMN differences reflected greater covariance with better task performance at baseline and both increasing and declining covariance with declining task performance over time for decliners. When examined during the resting state alone, the direction of change in DMN covariance was similar to that seen during task performance, but node-to-node relationships differed from those observed during the task condition. These results suggest that disengagement of DMN components during task performance is not essential for successful cognitive performance as previously proposed. Instead, a proper balance in network processes may be needed to support optimal task performance. PMID:27319002

  16. Parallel checksumming of data chunks of a shared data object using a log-structured file system

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-09-06

    Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
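
    The patent leaves the checksum algorithm open; as a hedged sketch of the write-then-verify flow, the snippet below uses zlib's crc32() as an assumed stand-in and simulates silent corruption between the write and the read.

      /* Illustrative per-chunk integrity check, not the patented system.
       * Build: cc crc.c -lz */
      #include <stdio.h>
      #include <string.h>
      #include <zlib.h>

      int main(void)
      {
          unsigned char chunk[256];
          memset(chunk, 7, sizeof chunk);

          /* Write path: the checksum travels with the chunk to storage. */
          uLong stored_crc = crc32(0L, chunk, sizeof chunk);

          /* ... time passes; simulate silent corruption in storage ... */
          chunk[100] ^= 1;

          /* Read path: recompute and compare before trusting the data. */
          uLong read_crc = crc32(0L, chunk, sizeof chunk);
          printf("integrity %s\n",
                 read_crc == stored_crc ? "ok" : "violated");
          return 0;
      }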

  17. Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning

    DOE PAGES

    Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron; ...

    2017-11-21

    Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on Infiniband networks. We expose the interactions between RDMA, main memory, and cache when applications and out-of-band services compete for memory resources. We then explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that NiMC's impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that previously have been shown to be performance resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluated three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation, and network throttling.

  18. Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron

    Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on Infiniband networks. We expose the interactions between RDMA, main memory, and cache when applications and out-of-band services compete for memory resources. We then explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that NiMC's impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that previously have been shown to be performance resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluated three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation, and network throttling.

  19. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system comprises a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  20. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

    We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In the single-GPU case we achieved a performance of about 56 GFlops, about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment in the ghost zones was found to impose a substantial data-transfer cost between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
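    A small numpy sketch of the ghost-zone fix mentioned above, assuming a C-ordered 3D subdomain: the strided boundary planes are gathered into one contiguous buffer so the host-GPU exchange can be done in a single copy. The array shape and function name are invented.

        import numpy as np

        field = np.zeros((130, 130, 130), dtype=np.float32)  # subdomain + ghosts

        def pack_z_ghosts(f):
            # f[:, :, 1] and f[:, :, -2] are strided planes in C order;
            # stacking them into a contiguous buffer lets them move in
            # one transfer instead of many small strided copies
            return np.ascontiguousarray(np.stack((f[:, :, 1], f[:, :, -2])))

        buf = pack_z_ghosts(field)        # one contiguous send buffer
        assert buf.flags["C_CONTIGUOUS"]  # ready for a single cudaMemcpy/MPI send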

  1. Memory factors in Rey AVLT: Implications for early staging of cognitive decline.

    PubMed

    Fernaeus, Sven-Erik; Ostberg, Per; Wahlund, Lars-Olof; Hellström, Ake

    2014-12-01

    Supraspan verbal list learning is widely used to assess dementia and related cognitive disorders where declarative memory deficits are a major clinical sign. While the overall learning rate is important for diagnosis, serial position patterns may give insight into more specific memory processes in patients with cognitive impairment. This study explored these patterns in a memory clinic clientele. One hundred eighty-three participants took the Rey Auditory-Verbal Learning Test (RAVLT). The major groups were patients with Alzheimer's disease (AD), Vascular Dementia (VD), Mild Cognitive Impairment (MCI), and Subjective Cognitive Impairment (SCI) as well as healthy controls (HC). Raw scores for the five trials and five serial partitions were factor analysed. Three memory factors were found and interpreted as Primacy, Recency, and Resistance to Interference. AD and MCI patients had impaired scores on all factors. SCI patients were significantly impaired in the Resistance to Interference factor, and in the Recency factor at the first trial. The main conclusion is that serial position data from word list testing reflect specific memory capacities which vary with levels of cognitive impairment. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  2. Improvement of multiprocessing performance by using optical centralized shared bus

    NASA Astrophysics Data System (ADS)

    Han, Xuliang; Chen, Ray T.

    2004-06-01

    With the ever-increasing need to solve larger and more complex problems, multiprocessing is attracting more and more research efforts. One of the challenges facing the multiprocessor designers is to fulfill in an effective manner the communications among the processes running in parallel on multiple multiprocessors. The conventional electrical backplane bus provides narrow bandwidth as restricted by the physical limitations of electrical interconnects. In the electrical domain, in order to operate at high frequency, the backplane topology has been changed from the simple shared bus to the complicated switched medium. However, the switched medium is an indirect network. It cannot support multicast/broadcast as effectively as the shared bus. Besides the additional latency of going through the intermediate switching nodes, signal routing introduces substantial delay and considerable system complexity. Alternatively, optics has been well known for its interconnect capability. Therefore, it has become imperative to investigate how to improve multiprocessing performance by utilizing optical interconnects. From the implementation standpoint, the existing optical technologies still cannot fulfill the intelligent functions that a switch fabric should provide as effectively as their electronic counterparts. Thus, an innovative optical technology that can provide sufficient bandwidth capacity, while at the same time, retaining the essential merits of the shared bus topology, is highly desirable for the multiprocessing performance improvement. In this paper, the optical centralized shared bus is proposed for use in the multiprocessing systems. This novel optical interconnect architecture not only utilizes the beneficial characteristics of optics, but also retains the desirable properties of the shared bus topology. Meanwhile, from the architecture standpoint, it fits well in the centralized shared-memory multiprocessing scheme. Therefore, a smooth migration with substantial multiprocessing performance improvement is expected. To prove the technical feasibility from the architecture standpoint, a conceptual emulation of the centralized shared-memory multiprocessing scheme is demonstrated on a generic PCI subsystem with an optical centralized shared bus.

  3. SciSpark: In-Memory Map-Reduce for Earth Science Algorithms

    NASA Astrophysics Data System (ADS)

    Ramirez, P.; Wilson, B. D.; Whitehall, K. D.; Palamuttam, R. S.; Mattmann, C. A.; Shah, S.; Goodman, A.; Burke, W.

    2016-12-01

    We are developing a lightning-fast Big Data technology called SciSpark based on Apache™ Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark extends Spark to support Earth Science use in three ways: efficient ingest of N-dimensional geo-located arrays (physical variables) from netCDF3/4, HDF4/5, and/or OPeNDAP URLs; array operations for dense arrays in scala and Java using the ND4S/ND4J or Breeze libraries; and operations to "split" datasets across a Spark cluster by time or space or both. For example, a decade-long time-series of geo-variables can be split across time to enable parallel "speedups" of analysis by day, month, or season. Similarly, very high-resolution climate grids can be partitioned into spatial tiles for parallel operations across rows, columns, or blocks. In addition, using Spark's gateway into python, PySpark, one can utilize the entire ecosystem of numpy, scipy, etc. Finally, SciSpark Notebooks provide a modern eNotebook technology in which scala, python, or spark-sql codes are entered into cells in the Notebook and executed on the cluster, with results, plots, or graph visualizations displayed in "live widgets". We have exercised SciSpark by implementing three complex Use Cases: discovery and evolution of Mesoscale Convective Complexes (MCCs) in storms, yielding a graph of connected components; PDF clustering of atmospheric state using parallel K-Means; and statistical "rollups" of geo-variables or model-to-obs differences (i.e. mean, stddev, skewness, & kurtosis) by day, month, season, year, and multi-year. Geo-variables are ingested and split across the cluster using methods on the sciSparkContext object, including netCDFVariables() for spatial decomposition and wholeNetCDFVariables() for time-series. The presentation will cover the architecture of SciSpark, the design of the scientific RDD (sRDD) data structures for N-dim. arrays, results from the three science Use Cases, example Notebooks, lessons learned from the algorithm implementations, and parallel performance metrics.
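    A hedged PySpark sketch of the "split by time" rollup idea, using only the plain Spark API (the sciSparkContext methods named above are not reproduced). The decade of monthly grids is invented.

        import numpy as np
        from pyspark import SparkContext

        sc = SparkContext(appName="split-by-time-sketch")
        # pretend each element is (month index, 2D grid of a geo-variable)
        months = sc.parallelize([(m, np.full((90, 180), float(m))) for m in range(120)])
        # parallel "rollup": average grid mean per calendar month across a decade
        rollup = (months.map(lambda kv: (kv[0] % 12, (kv[1].mean(), 1)))
                        .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                        .mapValues(lambda s: s[0] / s[1]))
        print(sorted(rollup.collect()))
        sc.stop()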

  4. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE PAGES

    Wang, Bei; Ethier, Stephane; Tang, William; ...

    2017-06-29

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization, have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
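    Not GTC-P itself: a toy 1D particle-in-cell step in numpy sketching the deposit/solve/gather-push cycle whose parallelization the abstract describes. The grid size, particle count, and stand-in field solve are arbitrary assumptions.

        import numpy as np

        ng, npart, dt, L = 64, 10000, 0.1, 1.0
        rng = np.random.default_rng(1)
        x = rng.random(npart) * L              # particle positions
        v = rng.standard_normal(npart) * 0.01  # particle velocities

        def step(x, v):
            # deposit: histogram particles onto the grid (charge density)
            rho, _ = np.histogram(x, bins=ng, range=(0.0, L))
            # "solve": toy field from the density gradient (stand-in for Poisson)
            E = -np.gradient(rho.astype(float), L / ng)
            # gather + push: interpolate field at particles, advance, wrap around
            idx = np.minimum((x / (L / ng)).astype(int), ng - 1)
            v = v + dt * E[idx]
            return (x + dt * v) % L, v

        x, v = step(x, v)  # one deposit/solve/push cycle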

  5. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization, have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.

  6. Perceptions of Shared Decision Making Among Patients with Spinal Cord Injuries/Disorders.

    PubMed

    Locatelli, Sara M; Etingen, Bella; Heinemann, Allen; Neumann, Holly DeMark; Miskovic, Ana; Chen, David; LaVela, Sherri L

    2016-01-01

    Background: Individuals with spinal cord injuries/disorders (SCI/D) are interested in, and benefit from, shared decision making (SDM). Objective: To explore SDM among individuals with SCI/D and how demographics and health and SCI/D characteristics are related to SDM. Method: Individuals with SCI/D who were at least 1 year post injury, resided in the Chicago metropolitan area, and received SCI care at a Veterans Affairs (VA; n = 124) or an SCI Model Systems facility (n = 326) completed a mailed survey measuring demographics, health and SCI/D characteristics, physical and mental health status, and perceptions of care, including SDM, using the Combined Outcome Measure for Risk Communication and Treatment Decision-Making Effectiveness (COMRADE) that assesses decision-making effectiveness (effectiveness) and risk communication (communication). Bivariate analyses and multiple linear regression were used to identify variables associated with SDM. Results: Participants were mostly male (83%) and White (70%) and were an average age of 54 years (SD = 14.3). Most had traumatic etiology, 44% paraplegia, and 49% complete injury. Veteran/civilian status and demographics were unrelated to scores. Bivariate analyses showed that individuals with tetraplegia had better effectiveness scores than those with paraplegia. Better effectiveness was correlated with better physical and mental health; better communication was correlated with better mental health. Multiple linear regressions showed that tetraplegia, better physical health, and better mental health were associated with better effectiveness, and better mental health was associated with better communication. Conclusion: SCI/D and health characteristics were the only variables associated with SDM. Interventions to increase engagement in SDM and provider attention to SDM may be beneficial, especially for individuals with paraplegia or in poorer physical and mental health.

  7. Domain Wall Fermion Inverter on Pentium 4

    NASA Astrophysics Data System (ADS)

    Pochinsky, Andrew

    2005-03-01

    A highly optimized domain wall fermion inverter has been developed as part of the SciDAC lattice initiative. By designing the code to minimize memory bus traffic, it achieves high cache reuse and performance in excess of 2 GFlops for out of L2 cache problem sizes on a GigE cluster with 2.66 GHz Xeon processors. The code uses the SciDAC QMP communication library.

  8. A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems

    DOE PAGES

    Song, Fengguang; Dongarra, Jack

    2014-10-01

    Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
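    A shared-memory numpy sketch of the tile-algorithm idea for the Cholesky case: the factorization is expressed as POTRF/TRSM/SYRK-style tile tasks, which the paper's runtime would schedule dynamically across CPUs and GPUs (the scheduling itself is not modeled here).

        import numpy as np

        def tiled_cholesky(A, nb):
            n = A.shape[0] // nb  # number of tiles per dimension
            T = [[A[i*nb:(i+1)*nb, j*nb:(j+1)*nb].copy() for j in range(n)]
                 for i in range(n)]
            for k in range(n):
                T[k][k] = np.linalg.cholesky(T[k][k])                # POTRF task
                for i in range(k + 1, n):
                    T[i][k] = np.linalg.solve(T[k][k], T[i][k].T).T  # TRSM task
                for i in range(k + 1, n):
                    for j in range(k + 1, i + 1):
                        T[i][j] -= T[i][k] @ T[j][k].T               # SYRK/GEMM tasks
            return np.block([[T[i][j] if j <= i else np.zeros((nb, nb))
                              for j in range(n)] for i in range(n)])

        M = np.random.default_rng(2).random((8, 8))
        A = M @ M.T + 8 * np.eye(8)      # symmetric positive definite test matrix
        Lf = tiled_cholesky(A, 4)
        assert np.allclose(Lf @ Lf.T, A)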

  9. A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Fengguang; Dongarra, Jack

    Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.

  10. Default Mode Network Interference in Mild Traumatic Brain Injury – A Pilot Resting State Study

    PubMed Central

    Sours, Chandler; Zhuo, Jiachen; Janowich, Jacqueline; Aarabi, Bizhan; Shanmuganathan, Kathirkamanthan; Gullapalli, Rao P

    2013-01-01

    In this study we investigated the functional connectivity in 23 Mild TBI (mTBI) patients with and without memory complaints using resting state fMRI in the sub-acute stage of injury as well as a group of control participants. Results indicate that mTBI patients with memory complaints performed significantly worse than patients without memory complaints on tests assessing memory from the Automated Neuropsychological Assessment Metrics (ANAM). Altered functional connectivity was observed between the three groups between the default mode network (DMN) and the nodes of the task positive network (TPN). Altered functional connectivity was also observed between both the TPN and DMN and nodes associated with the Salience Network (SN). Following mTBI there is a reduction in anti-correlated networks for both those with and without memory complaints for the DMN, but only a reduction in the anti-correlated network in mTBI patients with memory complaints for the TPN. Furthermore, an increased functional connectivity between the TPN and SN appears to be associated with reduced performance on memory assessments. Overall the results suggest that a disruption in the segregation of the DMN and the TPN at rest may be mediated through both a direct pathway of increased FC between various nodes of the TPN and DMN, and through an indirect pathway that links the TPN and DMN through nodes of the SN. This disruption between networks may cause a detrimental impact on memory functioning following mTBI, supporting the Default Mode Interference Hypothesis in the context of mTBI related memory deficits. PMID:23994210

  11. Default mode network interference in mild traumatic brain injury - a pilot resting state study.

    PubMed

    Sours, Chandler; Zhuo, Jiachen; Janowich, Jacqueline; Aarabi, Bizhan; Shanmuganathan, Kathirkamanthan; Gullapalli, Rao P

    2013-11-06

    In this study we investigated the functional connectivity in 23 Mild TBI (mTBI) patients with and without memory complaints using resting state fMRI in the sub-acute stage of injury as well as a group of control participants. Results indicate that mTBI patients with memory complaints performed significantly worse than patients without memory complaints on tests assessing memory from the Automated Neuropsychological Assessment Metrics (ANAM). Altered functional connectivity was observed between the three groups between the default mode network (DMN) and the nodes of the task positive network (TPN). Altered functional connectivity was also observed between both the TPN and DMN and nodes associated with the Salience Network (SN). Following mTBI there is a reduction in anti-correlated networks for both those with and without memory complaints for the DMN, but only a reduction in the anti-correlated network in mTBI patients with memory complaints for the TPN. Furthermore, an increased functional connectivity between the TPN and SN appears to be associated with reduced performance on memory assessments. Overall the results suggest that a disruption in the segregation of the DMN and the TPN at rest may be mediated through both a direct pathway of increased FC between various nodes of the TPN and DMN, and through an indirect pathway that links the TPN and DMN through nodes of the SN. This disruption between networks may cause a detrimental impact on memory functioning following mTBI, supporting the Default Mode Interference Hypothesis in the context of mTBI related memory deficits. © 2013 Elsevier B.V. All rights reserved.

  12. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using the stack-shifting hybrid scheduling approach. Different from the traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism which can decrease both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is the memory cost optimized in LiveOS, but so is the energy cost, achieved by using the multi-core "context aware" and multi-core "power-off/wakeup" energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% when compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264

  13. Multi-petascale highly efficient parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel-processing message passing.

  14. Subjective cognitive impairment: functional MRI during a divided attention task.

    PubMed

    Rodda, J; Dannhauser, T; Cutinha, D J; Shergill, S S; Walker, Z

    2011-10-01

    Individuals with subjective cognitive impairment (SCI) have persistent memory complaints but normal neurocognitive performance. For some, this may represent a pre-mild cognitive impairment (MCI) stage of Alzheimer's disease (AD). Given that attentional deficits and associated brain activation changes are present early in the course of AD, we aimed to determine whether SCI is associated with brain activation changes during attentional processing. Eleven SCI subjects and 10 controls completed a divided attention task during functional magnetic resonance imaging. SCI and control groups did not differ in sociodemographic, neurocognitive or behavioural measures. When group activation during the divided attention task was compared, the SCI group demonstrated increased activation in left medial temporal lobe, bilateral thalamus, posterior cingulate and caudate. This pattern of increased activation is similar to the pattern of decreased activation reported during divided attention in AD and may indicate compensatory changes. These findings suggest the presence of early functional changes in SCI; longitudinal studies will help to further elucidate the relationship between SCI and AD. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  15. Spinal-Cord-Injured Individual's Experiences of Having a Partner: A Phenomenological-Hermeneutic Study.

    PubMed

    Angel, Sanne

    2015-06-01

    Having a partner is a strong factor in adaptation to the new life situation with a spinal cord injury (SCI). Still, more detailed knowledge about the partner's influence, according to the experiences of individuals with SCI, could contribute to the understanding of the situation after an injury. The aim of this phenomenological-hermeneutic article is to achieve a deeper understanding of nine individuals' experiences during the first 2 years after SCI. In rehabilitation after SCI, the partner supported the SCI individual's life spirit by not giving up and by still seeing possibilities in the future. The partner reinforced the SCI individual's commitment to life by sharing experiences; providing love, trust, and hope; and giving priority to the best things in life for the SCI individual. This implied cohabitation providing concrete help, and an intimacy that helped the individuals cope with problems and anxieties and allowed them to realize themselves. This promoted feelings of profound gratitude but also of dependency. Thus, the SCI individual benefitted from the partner's support mentally and physically, which enabled a life that would not otherwise be possible.

  16. Finding influential nodes for integration in brain networks using optimal percolation theory.

    PubMed

    Del Ferraro, Gino; Moreno, Andrea; Min, Byungjoon; Morone, Flaviano; Pérez-Ramírez, Úrsula; Pérez-Cervera, Laura; Parra, Lucas C; Holodny, Andrei; Canals, Santiago; Makse, Hernán A

    2018-06-11

    Global integration of information in the brain results from complex interactions of segregated brain networks. Identifying the most influential neuronal populations that efficiently bind these networks is a fundamental problem of systems neuroscience. Here, we apply optimal percolation theory and pharmacogenetic interventions in vivo to predict and subsequently target nodes that are essential for global integration of a memory network in rodents. The theory predicts that integration in the memory network is mediated by a set of low-degree nodes located in the nucleus accumbens. This result is confirmed with pharmacogenetic inactivation of the nucleus accumbens, which eliminates the formation of the memory network, while inactivations of other brain areas leave the network intact. Thus, optimal percolation theory predicts essential nodes in brain networks. This could be used to identify targets of interventions to modulate brain function.
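    A rough computational analogue of the approach, using networkx on a synthetic graph: greedily remove the node whose removal most shrinks the giant connected component. This is a simple proxy for optimal percolation, not the paper's collective-influence algorithm or its rodent imaging data.

        import networkx as nx

        def influential_nodes(G, n_remove):
            G = G.copy()
            removed = []
            giant = lambda H: len(max(nx.connected_components(H), default=set(), key=len))
            for _ in range(n_remove):
                # pick the node whose removal minimizes the remaining giant component
                best = min(G.nodes, key=lambda v: giant(nx.restricted_view(G, [v], [])))
                removed.append(best)
                G.remove_node(best)
            return removed

        G = nx.barabasi_albert_graph(200, 2, seed=3)
        print(influential_nodes(G, 5))  # candidate "integrator" nodes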

  17. Exploring Manycore Multinode Systems for Irregular Applications with FPGA Prototyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceriani, Marco; Palermo, Gianluca; Secchi, Simone

    We present a prototype of a multi-core architecture implemented on FPGA, designed to enable efficient execution of irregular applications on distributed shared memory machines while maintaining high performance on regular workloads. The architecture is composed of off-the-shelf soft cores, local interconnection and a memory interface, integrated with custom components that optimize it for irregular applications. It relies on three key elements: a global address space, multithreading, and fine-grained synchronization. Global addresses are scrambled to reduce the formation of network hot-spots, while the latency of the transactions is covered by integrating a hardware scheduler within the custom load/store buffers to take advantage of the availability of multiple execution threads, increasing efficiency in a way that is transparent to the application. We evaluated a dual-node system on irregular kernels, showing scalability in the number of cores and threads.
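    A hedged sketch of the address-scrambling element: hash the block address before mapping it to a home node, so sequential global addresses spread across nodes instead of forming hot-spots. The mixing constants and block size are illustrative, not the prototype's hardware scheme.

        def home_node(addr, n_nodes, block=64):
            blk = addr // block
            blk ^= (blk >> 17) | ((blk & 0x1FFFF) << 13)    # cheap bit mix
            blk = (blk * 0x9E3779B97F4A7C15) & (2**64 - 1)  # Fibonacci-hashing step
            return blk % n_nodes

        # sequential block addresses no longer land on consecutive nodes
        print([home_node(a, 8) for a in range(0, 64 * 8, 64)])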

  18. Subjective memory complaints, depressive symptoms and cognition in patients attending a memory outpatient clinic.

    PubMed

    Lehrner, J; Moser, D; Klug, S; Gleiß, A; Auff, E; Dal-Bianco, P; Pusswald, G

    2014-03-01

    The goals of this study were to establish prevalence of subjective memory complaints (SMC) and depressive symptoms (DS) and their relation to cognitive functioning and cognitive status in an outpatient memory clinic cohort. Two hundred forty-eight cognitively healthy controls and 581 consecutive patients with cognitive complaints who fulfilled the inclusion criteria were included in the study. A statistically significant difference (p < 0.001) between control group and patient group regarding mean SMC was detected. 7.7% of controls reported a considerable degree of SMC, whereas 35.8% of patients reported considerable SMC. Additionally, a statistically significant difference (p < 0.001) between controls and patient group regarding Beck depression score was detected. 16.6% of controls showed a clinical relevant degree of DS, whereas 48.5% of patients showed DS. An analysis of variance revealed a statistically significant difference across all four groups (control group, SCI group, naMCI group, aMCI group) (p < 0.001). Whereas 8% of controls reported a considerable degree of SMC, 34% of the SCI group, 31% of the naMCI group, and 54% of the aMCI group reported considerable SMC. A two-factor analysis of variance with the factors cognitive status (controls, SCI group, naMCI group, aMCI group) and depressive status (depressed vs. not depressed) and SMC as dependent variable revealed that both factors were significant (p < 0.001), whereas the interaction was not (p = 0.820). A large proportion of patients seeking help in a memory outpatient clinic report considerable SMC, with an increasing degree from cognitively healthy elderly to aMCI. Depressive status increases SMC consistently across groups with different cognitive status.

  19. High Prevalence of Stress and Low Prevalence of Alzheimer Disease CSF Biomarkers in a Clinical Sample with Subjective Cognitive Impairment.

    PubMed

    Eckerström, Marie; Berg, Anne Ingeborg; Nordlund, Arto; Rolstad, Sindre; Sacuiu, Simona; Wallin, Anders

    2016-01-01

    Subjective cognitive impairment (SCI) is a trigger for seeking health care in a possible preclinical phase of Alzheimer's disease (AD), although the characteristics of SCI need clarification. We investigated the prevalence of psychosocial stress, depressive symptoms and CSF AD biomarkers in SCI and MCI (mild cognitive impairment). Memory clinic patients (SCI: n = 90; age: 59.8 ± 7.6 years; MCI: n = 160; age: 63.7 ± 7.0 years) included in the Gothenburg MCI study were examined at baseline. Variables were analyzed using logistic regression with SCI as dependent variable. Stress was more prevalent in SCI (51.1%) than MCI (23.1%); p < 0.0005. SCI patients had more previous depressive symptoms (p = 0.006), but showed no difference compared to MCI patients considering current depressive symptoms. A positive CSF AD profile was present in 14.4% of SCI patients and 35.0% of MCI patients (p = 0.001). Stress (p = 0.002), previous stress/depressive symptoms (p = 0.006) and a negative CSF AD profile (p = 0.036) predicted allocation to the SCI group. Psychosocial stress is more prevalent in SCI than previously acknowledged. The high prevalence and long-term occurrence of stress/depressive symptoms in SCI in combination with a low prevalence of altered CSF AD biomarkers strengthens the notion that AD is not the most likely etiology of SCI. © 2016 S. Karger AG, Basel.

  20. No Evidence for an Item Limit in Change Detection (Open Access)

    DTIC Science & Technology

    2013-02-28

    …working memory can hold no more than a fixed number of items ("item-limit models"). Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size ("continuous-resource models")…

  1. Nanophotonic rare-earth quantum memory with optically controlled retrieval

    NASA Astrophysics Data System (ADS)

    Zhong, Tian; Kindem, Jonathan M.; Bartholomew, John G.; Rochman, Jake; Craiciu, Ioana; Miyazono, Evan; Bettinelli, Marco; Cavalli, Enrico; Verma, Varun; Nam, Sae Woo; Marsili, Francesco; Shaw, Matthew D.; Beyer, Andrew D.; Faraon, Andrei

    2017-09-01

    Optical quantum memories are essential elements in quantum networks for long-distance distribution of quantum entanglement. Scalable development of quantum network nodes requires on-chip qubit storage functionality with control of the readout time. We demonstrate a high-fidelity nanophotonic quantum memory based on a mesoscopic neodymium ensemble coupled to a photonic crystal cavity. The nanocavity enables >95% spin polarization for efficient initialization of the atomic frequency comb memory and time bin-selective readout through an enhanced optical Stark shift of the comb frequencies. Our solid-state memory is integrable with other chip-scale photon source and detector devices for multiplexed quantum and classical information processing at the network nodes.

  2. Development of Low Parasitic Light Sensitivity and Low Dark Current 2.8 μm Global Shutter Pixel †

    PubMed Central

    Yokoyama, Toshifumi; Tsutsui, Masafumi; Suzuki, Masakatsu; Nishi, Yoshiaki; Mizuno, Ikuo; Lahav, Assaf

    2018-01-01

    We developed a low parasitic light sensitivity (PLS) and low dark current 2.8 μm global shutter pixel. We propose a new inner lens design concept to realize both low PLS and high quantum efficiency (QE). 1/PLS is 7700 and QE is 62% at a wavelength of 530 nm. We also propose a new storage-gate based memory node for low dark current. P-type implants and negative gate biasing are introduced to suppress dark current at the surface of the memory node. This memory node structure shows the world's smallest dark current of 9.5 e−/s at 60 °C. PMID:29370146

  3. Development of Low Parasitic Light Sensitivity and Low Dark Current 2.8 μm Global Shutter Pixel.

    PubMed

    Yokoyama, Toshifumi; Tsutsui, Masafumi; Suzuki, Masakatsu; Nishi, Yoshiaki; Mizuno, Ikuo; Lahav, Assaf

    2018-01-25

    We developed a low parasitic light sensitivity (PLS) and low dark current 2.8 μm global shutter pixel. We propose a new inner lens design concept to realize both low PLS and high quantum efficiency (QE). 1/PLS is 7700 and QE is 62% at a wavelength of 530 nm. We also propose a new storage-gate based memory node for low dark current. P-type implants and negative gate biasing are introduced to suppress dark current at the surface of the memory node. This memory node structure shows the world's smallest dark current of 9.5 e−/s at 60 °C.

  4. Cooperative Learning for Distributed In-Network Traffic Classification

    NASA Astrophysics Data System (ADS)

    Joseph, S. B.; Loo, H. R.; Ismail, I.; Andromeda, T.; Marsono, M. N.

    2017-04-01

    Inspired by the concept of autonomic distributed/decentralized network management schemes, we consider the issue of information exchange among distributed network nodes to improve network performance and promote scalability for in-network monitoring. In this paper, we propose a cooperative learning algorithm for the propagation and synchronization of network information among autonomic distributed network nodes for online traffic classification. The results show that network nodes with sharing capability perform better, with a higher average accuracy of 89.21% (sharing data) and 88.37% (sharing clusters), compared to 88.06% for nodes without cooperative learning capability. The overall performance indicates that cooperative learning is promising for distributed in-network traffic classification.

  5. Multinode reconfigurable pipeline computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M. (Inventor); Littman, Michael G. (Inventor)

    1989-01-01

    A multinode parallel-processing computer is made up of a plurality of interconnected, large-capacity nodes, each including a reconfigurable pipeline of functional units such as Integer Arithmetic Logic Processors, Floating Point Arithmetic Processors, and Special Purpose Processors. The reconfigurable pipeline of each node is connected to a multiplane memory by a Memory-ALU switch NETwork (MASNET). The reconfigurable pipeline includes three (3) basic substructures formed from functional units which have been found to be sufficient to perform the bulk of all calculations. The MASNET controls the flow of signals from the memory planes to the reconfigurable pipeline and vice versa. The nodes are connectable together by an internode data router (hyperspace router) so as to form a hypercube configuration. The capability of the nodes to conditionally configure the pipeline at each tick of the clock, without requiring a pipeline flush, permits many powerful algorithms to be implemented directly.

  6. Mathematical Analysis of Vehicle Delivery Scale of Bike-Sharing Rental Nodes

    NASA Astrophysics Data System (ADS)

    Zhai, Y.; Liu, J.; Liu, L.

    2018-04-01

    Aiming at the lack of scientific, reasonable judgment of vehicle delivery scale and the insufficient optimization of scheduling decisions, this paper, based on features of bike-sharing usage, analyses the applicability of the discrete-time, discrete-state Markov chain and proves its properties to be irreducible, aperiodic and positive recurrent. Based on the above analysis, the paper concludes that the limit-state (steady-state) probability of the bike-sharing Markov chain exists and is independent of the initial probability distribution. The paper then analyses the difficulty of estimating the transition probability matrix parameters and of solving the linear equation group in the traditional algorithm for the bike-sharing Markov chain. To improve feasibility, this paper proposes a "virtual two-node vehicle scale solution" algorithm which treats all nodes other than the node to be solved as a single virtual node, and derives the transition probability matrix, the steady-state linear equation group, and the computational methods for the steady-state scale, steady-state arrival time and scheduling decision of the node to be solved. Finally, the paper evaluates the rationality and accuracy of the steady-state probability of the proposed algorithm by comparison with the traditional algorithm. By solving the steady-state scale of the nodes one by one, the proposed algorithm is shown to have strong feasibility, because it lowers the computational difficulty and reduces the number of statistics, which will help bike-sharing companies optimize the scale and scheduling of nodes.
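    A small numerical companion to the "virtual two-node" idea, with an invented 2x2 transition matrix: all stations other than the node of interest are collapsed into one virtual node, and the steady state pi satisfying pi P = pi is computed with numpy.

        import numpy as np

        P = np.array([[0.7, 0.3],   # node of interest -> (itself, virtual rest)
                      [0.1, 0.9]])  # virtual rest     -> (node, itself)

        # steady state: left eigenvector of P for eigenvalue 1, normalized
        w, V = np.linalg.eig(P.T)
        pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
        pi /= pi.sum()
        print("steady-state share at the node:", pi[0])
        assert np.allclose(pi @ P, pi)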

  7. Scaling properties in time-varying networks with memory

    NASA Astrophysics Data System (ADS)

    Kim, Hyewon; Ha, Meesoon; Jeong, Hawoong

    2015-12-01

    The formation of network structure is mainly influenced by an individual node's activity and its memory, where activity can usually be interpreted as the individual inherent property and memory can be represented by the interaction strength between nodes. In our study, we define the activity through the appearance pattern in the time-aggregated network representation, and quantify the memory through the contact pattern of empirical temporal networks. To address the role of activity and memory in epidemics on time-varying networks, we propose temporal-pattern coarsening of activity-driven growing networks with memory. In particular, we focus on the relation between time-scale coarsening and spreading dynamics in the context of dynamic scaling and finite-size scaling. Finally, we discuss the universality issue of spreading dynamics on time-varying networks for various memory-causality tests.

  8. Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications

    NASA Astrophysics Data System (ADS)

    Francés, J.; Otero, B.; Bleda, S.; Gallego, S.; Neipp, C.; Márquez, A.; Beléndez, A.

    2015-06-01

    The Finite-Difference Time-Domain (FDTD) method is applied to the analysis of vibroacoustic problems and to the study of the propagation of longitudinal and transversal waves in stratified media. The potential of the scheme and the relevance of each acceleration strategy for massive computations in FDTD are demonstrated in this work. In this paper, we propose two new specific implementations of the bi-dimensional FDTD scheme using multi-CPU and multi-GPU, respectively. In the first implementation, an open source message passing interface (OMPI) has been included in order to massively exploit the resources of a biprocessor station with two Intel Xeon processors. Moreover, in the CPU code version, the streaming SIMD extensions (SSE) and the advanced vector extensions (AVX) have been included, together with shared-memory approaches that take advantage of multi-core platforms. The second implementation, the multi-GPU code version, is based on the peer-to-peer communications available in CUDA on two GPUs (NVIDIA GTX 670). Subsequently, this paper presents an accurate analysis of the influence of the different code versions, including shared-memory approaches, vector instructions and multi-processors (both CPU and GPU), and compares them in order to delimit the degree of improvement of distributed solutions based on multi-CPU and multi-GPU. The performance of both approaches was analysed, and it was demonstrated that adding shared-memory schemes to CPU computing substantially improves the performance of vector instructions, enlarging the simulation sizes that use the cache memory of CPUs efficiently. In this case GPU computing is roughly twice as fast as the fine-tuned CPU version for both one and two nodes. However, for massive computations explicit vector instructions are not worthwhile, since memory bandwidth is the limiting factor and performance tends to be the same as the sequential version with auto-vectorisation and the shared-memory approach. In this scenario GPU computing is the best option since it provides homogeneous behaviour. More specifically, the speedup of GPU computing reaches an upper limit of 12 for both one and two GPUs, with peak performance of 80 GFlops and 146 GFlops for one GPU and two GPUs, respectively. Finally, the method is applied to an earth-crust profile in order to demonstrate the potential of our approach and the necessity of applying acceleration strategies in this type of application.
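    A plain-numpy sketch of a single 2D FDTD-style update step of the kind being accelerated; the real implementations map this stencil onto SSE/AVX vector units or CUDA kernels. The grid size, coefficient, and boundary handling are simplified assumptions.

        import numpy as np

        n, c = 256, 0.45  # grid size and Courant-like coefficient
        p_prev = np.zeros((n, n))
        p = np.zeros((n, n))
        p[n // 2, n // 2] = 1.0  # point source

        def fdtd_step(p, p_prev):
            lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
                   - 4.0 * p[1:-1, 1:-1])  # 5-point Laplacian stencil
            p_next = p.copy()
            p_next[1:-1, 1:-1] = (2.0 * p[1:-1, 1:-1] - p_prev[1:-1, 1:-1]
                                  + c * lap)
            return p_next, p

        for _ in range(100):
            p, p_prev = fdtd_step(p, p_prev)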

  9. Development of the Large-Scale Statistical Analysis System of Satellites Observations Data with Grid Datafarm Architecture

    NASA Astrophysics Data System (ADS)

    Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.

    2006-12-01

    In the Solar-Terrestrial Physics (STP) field, the amount of satellite observation data has been increasing every year. Three problems must be solved to achieve large-scale statistical analyses of such data. (i) More CPU power and larger memory and disk sizes are required; the total power of personal computers is not enough to analyze such an amount of data, while supercomputers provide high-performance CPUs and rich memory but are usually separated from the Internet or connected only for programming or data file transfer. (ii) Most of the observation data files are managed at distributed data sites over the Internet, so users have to know where the data files are located. (iii) Since no common data format in the STP field is available, users have to prepare a reading program for each dataset themselves. To overcome problems (i) and (ii), we constructed a parallel and distributed data analysis environment based on the Gfarm reference implementation of the Grid Datafarm architecture. The Gfarm shares computational resources and performs parallel distributed processing. In addition, the Gfarm provides the Gfarm filesystem, which can be viewed as a virtual directory tree spanning the nodes. The Gfarm environment is composed of three parts: a metadata server that manages distributed file information, filesystem nodes that provide computational resources, and a client that submits jobs to the metadata server and manages data-processing schedules. In the present study, both data files and data processes are parallelized on the Gfarm with 6 filesystem nodes, each with a 1 GHz Pentium V CPU, 256 MB of memory and a 40 GB disk. To evaluate the performance of the present Gfarm system, we scanned many data files of about 300 MB each using three processing methods: sequential processing in one node, sequential processing by each node, and parallel processing by each node. Comparing the number of files against the elapsed time, parallel and distributed processing shortened the elapsed time to one fifth of that of sequential processing. On the other hand, sequential processing was faster in another experiment in which each file was smaller than 100 KB; in that case the elapsed time to scan one file is within one second, which implies that disk swapping took place during parallel processing on each node. We note that the operation became unstable when the number of files exceeded 1000. To overcome problem (iii), we developed an original data class. This class supports reading data files in various formats by converting them into an original data format: it defines schemata for every type of data and encapsulates the structure of the data files. In addition, since this class provides a time re-sampling function, users can easily convert multiple arrays with different time resolutions into arrays with a common time resolution. Finally, using the Gfarm, we achieved a high-performance environment for large-scale statistical data analyses. It should be noted that the present method is effective only when individual data files are large enough. At present, we are restructuring a new Gfarm environment with 8 nodes, each with a 2 GHz Athlon 64 X2 dual-core CPU, 2 GB of memory and 1.2 TB of disk (RAID0). Our original class is to be implemented on this new Gfarm environment.
In the present talk, we show the latest results of applying the present system to data analyses over a huge number of satellite observation data files.
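    A minimal sketch of the time re-sampling function mentioned above: two invented series with different cadences are interpolated onto a common time base with numpy.

        import numpy as np

        t_a = np.arange(0.0, 600.0, 4.0)   # 4-second-resolution instrument
        t_b = np.arange(0.0, 600.0, 60.0)  # 60-second-resolution instrument
        a = np.sin(t_a / 50.0)
        b = np.cos(t_b / 50.0)

        t_common = np.arange(0.0, 600.0, 60.0)  # target cadence
        a_res = np.interp(t_common, t_a, a)     # resample series A
        b_res = np.interp(t_common, t_b, b)     # series B already aligned
        aligned = np.column_stack((t_common, a_res, b_res))
        print(aligned.shape)  # one array, one time base, ready for statistics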

  10. Shared address collectives using counter mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blocksome, Michael; Dozsa, Gabor; Gooding, Thomas M

    A shared address space on a compute node stores data received from a network and data to transmit to the network. The shared address space includes an application buffer that can be directly operated upon by a plurality of processes, for instance, running on different cores on the compute node. A shared counter is used for one or more of signaling arrival of the data across the plurality of processes running on the compute node, signaling completion of an operation performed by one or more of the plurality of processes, obtaining reservation slots by one or more of the plurality of processes, or combinations thereof.
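    A user-space imitation of the shared-counter signaling idea, using Python's multiprocessing primitives; the patent describes an on-node mechanism for network data, which this only loosely mirrors.

        from multiprocessing import Array, Process, Value

        def worker(rank, buf, arrived):
            buf[rank] = rank * rank  # write into the shared "application buffer"
            with arrived.get_lock():
                arrived.value += 1   # signal arrival via the shared counter

        if __name__ == "__main__":
            n = 4
            buf = Array("i", n)      # shared buffer visible to all processes
            arrived = Value("i", 0)  # shared counter
            procs = [Process(target=worker, args=(r, buf, arrived)) for r in range(n)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            assert arrived.value == n  # every process signaled completion
            print(list(buf))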

  11. An elementary quantum network using robust nuclear spin qubits in diamond

    NASA Astrophysics Data System (ADS)

    Kalb, Norbert; Reiserer, Andreas; Humphreys, Peter; Blok, Machiel; van Bemmelen, Koen; Twitchen, Daniel; Markham, Matthew; Taminiau, Tim; Hanson, Ronald

    Quantum registers containing multiple robust qubits can form the nodes of future quantum networks for computation and communication. Information storage within such nodes must be resilient to any type of local operation. Here we demonstrate multiple robust memories by employing five nuclear spins adjacent to a nitrogen-vacancy defect centre in diamond. We characterize the storage of quantum superpositions and their resilience to entangling attempts with the electron spin of the defect centre. The storage fidelity is found to be limited by the probabilistic electron spin reset after failed entangling attempts. Control over multiple memories is then utilized to encode states in decoherence protected subspaces with increased robustness. Furthermore we demonstrate memory control in two optically linked network nodes and characterize the storage capabilities of both memories in terms of the process fidelity with the identity. These results pave the way towards multi-qubit quantum algorithms in a remote network setting.

  12. Resting connectivity between salience nodes predicts recognition memory.

    PubMed

    Andreano, Joseph M; Touroutoglou, Alexandra; Dickerson, Bradford C; Barrett, Lisa F

    2017-06-01

    The resting connectivity of the brain's salience network, particularly the ventral subsystem of the salience network, has been previously associated with various measures of affective reactivity. Numerous studies have demonstrated that increased affective arousal leads to enhanced consolidation of memory. This suggests that individuals with greater ventral salience network connectivity will exhibit greater responses to affective experience, leading to a greater enhancement of memory by affect. To test this hypothesis, resting ventral salience connectivity was measured in 41 young adults, who were then exposed to neutral and negative affect inductions during a paired associate memory test. Memory performance for material learned under both negative and neutral induction was tested for correlation with resting connectivity between major ventral salience nodes. The results showed a significant interaction between mood induction (negative vs neutral) and connectivity between ventral anterior insula and pregenual anterior cingulate cortex, indicating that salience node connectivity predicted memory for material encoded under negative, but not neutral induction. These findings suggest that the network state of the perceiver, measured prior to affective experience, meaningfully influences the extent to which affect modulates memory. Implications of these findings for individuals with affective disorder, who show alterations in both connectivity and memory, are considered. © The Author (2017). Published by Oxford University Press.

  13. IGA-ADS: Isogeometric analysis FEM using ADS solver

    NASA Astrophysics Data System (ADS)

    Łoś, Marcin M.; Woźniak, Maciej; Paszyński, Maciej; Lenharth, Andrew; Hassaan, Muhamm Amber; Pingali, Keshav

    2017-08-01

    In this paper we present a fast explicit solver for the solution of non-stationary problems using L2 projections with the isogeometric finite element method. The solver has been implemented within the GALOIS framework. It enables parallel multi-core simulations of different time-dependent problems in 1D, 2D, or 3D. We have prepared the solver framework in a way that enables direct implementation of the selected PDE and corresponding boundary conditions. In this paper we describe the installation, the implementation of three exemplary PDEs, and the execution of the simulations on multi-core Linux cluster nodes. We consider three case studies, including heat transfer, linear elasticity, and non-linear flow in heterogeneous media. The presented package generates output suitable for interfacing with Gnuplot and ParaView visualization software. The exemplary simulations show near-perfect scalability on a Gilbert shared-memory node with four Intel® Xeon® CPU E7-4860 processors, each possessing 10 physical cores (for a total of 40 cores).

  14. Nonvolatile memory with Co-SiO2 core-shell nanocrystals as charge storage nodes in floating gate

    NASA Astrophysics Data System (ADS)

    Liu, Hai; Ferrer, Domingo A.; Ferdousi, Fahmida; Banerjee, Sanjay K.

    2009-11-01

    In this letter, we report a nanocrystal floating-gate memory with Co-SiO2 core-shell nanocrystal charge storage nodes. Using a water-in-oil microemulsion scheme, Co-SiO2 core-shell nanocrystals were synthesized and closely packed to achieve a high-density matrix in the floating gate without aggregation. The insulator shell also helps increase the thermal stability of the metal nanocrystal core during the fabrication process, improving memory performance.

  15. Announcing Supercomputer Summit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, Jack; Bland, Buddy; Nichols, Jeff

    Summit is the next leap in leadership-class computing systems for open science. With Summit we will be able to address, with greater complexity and higher fidelity, questions concerning who we are, our place on earth, and in our universe. Summit will deliver more than five times the computational performance of Titan's 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017. Like Titan, Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA's high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs, plus 800GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of I/O throughput, the nodes will be connected in a non-blocking fat-tree using a dual-rail Mellanox EDR InfiniBand interconnect. Upon completion, Summit will allow researchers in all fields of science unprecedented access to solving some of the world's most pressing challenges.

  16. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

    A motion video data system includes a compression system, including an image compressor, an image decompressor correlative to the image compressor having an input connected to an output of the image compressor, a feedback summing node having one input connected to an output of the image decompressor, a picture memory having an input connected to an output of the feedback summing node, apparatus for comparing an image stored in the picture memory with a received input image and deducing therefrom pixels having differences between the stored image and the received image and for retrieving from the picture memory a partial image including the pixels only and applying the partial image to another input of the feedback summing node, whereby to produce at the output of the feedback summing node an updated decompressed image, a subtraction node having one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image, the image compressor having an input connected to receive the difference image whereby to produce a compressed difference image at the output of the image compressor.
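
    The loop below is a minimal sketch of the feedback structure this claim describes: the picture memory predicts the incoming frame, only a difference image is compressed, and the decompressed difference is added back at the summing node to keep the picture memory synchronized with the decoder. A coarse quantizer stands in for the actual image compressor, and the patent's neural winner-take-all stage is not modeled.

      #include <stdio.h>

      #define W 8
      #define H 8
      #define Q 16                        /* quantizer step: the "compressor" */

      static int picture[H][W];           /* picture memory (decoded reference) */

      static int compress_px(int d)   { return d / Q; }   /* image compressor   */
      static int decompress_px(int c) { return c * Q; }   /* image decompressor */

      /* encode one frame against the stored picture */
      void encode_frame(const int in[H][W]) {
          for (int y = 0; y < H; ++y)
              for (int x = 0; x < W; ++x) {
                  int diff = in[y][x] - picture[y][x];    /* subtraction node  */
                  int code = compress_px(diff);           /* transmit 'code'   */
                  picture[y][x] += decompress_px(code);   /* feedback summing  */
              }
      }

      int main(void) {
          int frame[H][W] = {{0}};
          frame[3][4] = 200;              /* one bright moving pixel */
          encode_frame(frame);
          printf("reconstructed pixel: %d\n", picture[3][4]);
          return 0;
      }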

  17. [Subjective cognitive impairment and Alzheimer's disease: a two-year follow-up of 51 subjects].

    PubMed

    Sambuchi, Nathalie; Muraccioli, Isabelle; Alescio-Lautier, Béatrice; Paban, Véronique; Sambuc, Roland; Jouve, Élisabeth; Geda, Yonas Endale; Petersen, Ronald Karl; Michel, Bernard François

    2015-12-01

    Subjective cognitive impairment (SCI) is defined by a state of subjective complaint without objective cognitive deterioration. Amnestic mild cognitive impairment (A-MCI) characterizes a syndrome between normal cognitive aging and early Alzheimer's disease (E-AD), which it precedes by many years. SCI expresses a metacognitive impairment. A cohort of 51 subjects [7 normal controls (NC), 28 SCI, 12 A-MCI and 5 E-AD] was followed up for 24 months, with a neuropsychological evaluation every 6 months during the first year (V1, V2, V3), then 1 year later (V4). Among the 28 SCI subjects, 6 converted to A-MCI at V4 (21.4%), and 1 converted to A-MCI at V3, then to E-AD at V4. These results suggest a continuum from SCI to A-MCI and E-AD. Progressive SCI differed from non-progressive SCI on verbal episodic memory and executive function tests at the initial examination. MRI showed anterior cingulate atrophy in all SCI patients, but hippocampal atrophy was observed in only 20 patients. Our results suggest that metacognitive impairment is the expression of a dysfunction in the anterior prefrontal cortex, in correlation with a syndrome of hyper-attention.

  18. Working memory training improves visual short-term memory capacity.

    PubMed

    Schwarb, Hillary; Nail, Jayde; Schumacher, Eric H

    2016-01-01

    Since antiquity, philosophers, theologians, and scientists have been interested in human memory. However, researchers today are still working to understand its capabilities, boundaries, and architecture. While the storage capabilities of long-term memory are seemingly unlimited (Bahrick, J Exp Psychol 113:1-2, 1984), working memory, or the ability to maintain and manipulate information held in memory, seems to have stringent capacity limits (e.g., Cowan, Behav Brain Sci 24:87-185, 2001). Individual differences do exist, however, and these differences can often predict performance on a wide variety of tasks (cf. Engle, What is working-memory capacity? 297-314, 2001). Recently, researchers have promoted the enticing possibility that simple behavioral training can expand the limits of working memory, which may in turn lead to improvements in other cognitive processes (cf. Morrison and Chein, Psychol Bull Rev 18:46-60, 2011). However, initial investigations across a wide variety of cognitive functions have produced mixed results regarding the transferability of training-related improvements. Across two experiments, the present research focuses on the benefit of working memory training for visual short-term memory capacity, a cognitive process that has received little attention in the training literature. Data reveal training-related improvement in global measures of visual short-term memory as well as in measures of the independent sub-processes that contribute to capacity (Awh et al., Psychol Sci 18(7):622-628, 2007). These results suggest that the ability to inhibit irrelevant information within and between trials is enhanced via n-back training, allowing for selective improvement on untrained tasks. Additionally, we highlight a potential limitation of the standard adaptive training procedure and propose a modified design to ensure variability in the training environment.

  19. Cognitive performance of people with traumatic spinal cord injury: a cross-sectional study comparing people with subacute and chronic injuries.

    PubMed

    Molina, B; Segura, A; Serrano, J P; Alonso, F J; Molina, L; Pérez-Borrego, Y A; Ugarte, M I; Oliviero, A

    2018-02-22

    Cross-sectional study. To assess the impact of spinal cord injury (SCI) on cognitive function in individuals with subacute and chronic SCI. National Hospital for SCI patients (Spain). The present investigation was designed to determine the nature, pattern, and extent of cognitive deficits in a group of participants with subacute (n = 32) and chronic (n = 34) SCI, using a comprehensive battery of reliable and validated neuropsychological assessments to study a broad range of cognitive functions. Twenty-seven able-bodied subjects matched to the groups with SCI for age and educational level formed the control group. The neuropsychological assessment showed alterations in the domain of attention, processing speed, memory and learning, executive functions, and in recognition in participants with SCI. The prevalence of cognitive dysfunction in the chronic stage was also confirmed at the individual level. The comparison of the neuropsychological assessment between the groups with subacute and chronic SCI showed a worsening of cognitive functions in those with chronic SCI compared to the group with subacute SCI. In participants with SCI, cognitive dysfunctions are present in the subacute stage and worsen over time. From a clinical point of view, we confirmed the presence of cognitive dysfunction that may interfere with the first stage of rehabilitation which is the most intense and important. Moreover, cognitive dysfunction may be important beyond the end of the first stage of rehabilitation as it can affect an individual's quality of life and possible integration to society.

  20. Parallel reduced-instruction-set-computer architecture for real-time symbolic pattern matching

    NASA Astrophysics Data System (ADS)

    Parson, Dale E.

    1991-03-01

    This report discusses ongoing work on a parallel reduced-instruction-set-computer (RISC) architecture for automatic production matching. The PRIOPS compiler takes advantage of the memoryless character of automatic processing by translating a program's collection of automatic production tests into an equivalent combinational circuit: a digital circuit without memory, whose outputs are immediate functions of its inputs. The circuit provides a highly parallel, fine-grain model of automatic matching. The compiler then maps the combinational circuit onto RISC hardware. The heart of the processor is an array of comparators capable of testing production conditions in parallel. Each comparator attaches to private memory that contains virtual circuit nodes: records of the current state of nodes and busses in the combinational circuit. All comparator memories hold identical information, allowing simultaneous update for a single changing circuit node and simultaneous retrieval of different circuit nodes by different comparators. Along with the comparator-based logic unit is a sequencer that determines the current combination of production-derived comparisons to try, based on the combined success and failure of previous combinations of comparisons. The memoryless nature of automatic matching allows the compiler to designate invariant memory addresses for virtual circuit nodes, and to generate the most effective sequences of comparison test combinations. The result is maximal utilization of parallel hardware, indicating speed increases and scalability beyond that found for coarse-grain, multiprocessor approaches to concurrent Rete matching. Future work will consider application of this RISC architecture to the standard (controlled) Rete algorithm, where search through memory dominates portions of matching.

  1. On the effect of memory in one-dimensional K=4 automata on networks

    NASA Astrophysics Data System (ADS)

    Alonso-Sanz, Ramón; Cárdenas, Juan Pablo

    2008-12-01

    The effect of implementing memory in cells of one-dimensional CA, and on nodes of various types of automata on networks with increasing degrees of random rewiring is studied in this article, paying particular attention to the case of four inputs. As a rule, memory induces a moderation in the rate of changing nodes and in the damage spreading, albeit in the latter case memory turns out to be ineffective in the control of the damage as the wiring network moves away from the ordered structure that features proper one-dimensional CA. This article complements the previous work done in the two-dimensional context.

  2. A new routing enhancement scheme based on node blocking state advertisement in wavelength-routed WDM networks

    NASA Astrophysics Data System (ADS)

    Hu, Peigang; Jin, Yaohui; Zhang, Chunlei; He, Hao; Hu, WeiSheng

    2005-02-01

    Increasing switching capacity brings considerable complexity to the optical node. Due to limitations in cost and technology, an optical node is often designed with partial switching capability and partial resource sharing. This means that the node is blocking to some extent; examples include the multi-granularity switching node, which uses pass-through wavelengths to reduce the dimension of the OXC, and the OXC with partially shared wavelength converters (WCs). It is conceivable that these blocking nodes will have great effects on the problem of routing and wavelength assignment. Some previous works studied the blocking case of the partial-WC OXC using complicated wavelength assignment algorithms, but the complexity of these schemes makes them impractical in real networks. In this paper, we propose a new scheme based on node blocking state advertisement to reduce the retry or rerouting probability and improve the efficiency of routing in networks with blocking nodes. In the scheme, node blocking states are advertised to the other nodes in the network and used in subsequent route calculations to find the path with the lowest blocking probability. The performance of the scheme is evaluated using a discrete event model on the 14-node NSFNET, in which every node employs a kind of partially shared WC OXC structure. In the simulation, a simple First-Fit wavelength assignment algorithm is used. The simulation results demonstrate that the new scheme considerably reduces the retry or rerouting probability in the routing process.
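
    A minimal sketch of the route-calculation idea, under the assumption that each node advertises a single blocking probability: minimizing a per-node penalty of -log(1 - p) along a path is equivalent to maximizing the end-to-end probability of not being blocked, so a standard shortest-path search finds the least-blocking route. The topology and the advertised probabilities below are invented.

      /* Compile: cc route.c -lm */
      #include <stdio.h>
      #include <math.h>

      #define N 5
      #define INF 1e18

      int main(void) {
          int adj[N][N] = {               /* invented 5-node topology */
              {0,1,1,0,0},
              {1,0,1,1,0},
              {1,1,0,0,1},
              {0,1,0,0,1},
              {0,0,1,1,0},
          };
          double p_block[N] = {0.0, 0.30, 0.05, 0.10, 0.0};  /* advertised */
          double cost[N];
          int done[N] = {0};

          for (int i = 0; i < N; ++i) cost[i] = INF;
          cost[0] = 0.0;                  /* source = node 0 */

          for (int iter = 0; iter < N; ++iter) {   /* Dijkstra on penalties */
              int u = -1;
              for (int i = 0; i < N; ++i)
                  if (!done[i] && (u < 0 || cost[i] < cost[u])) u = i;
              done[u] = 1;
              for (int v = 0; v < N; ++v)
                  if (adj[u][v] && !done[v]) {
                      double w = -log(1.0 - p_block[v]);   /* node penalty */
                      if (cost[u] + w < cost[v]) cost[v] = cost[u] + w;
                  }
          }
          printf("P(unblocked) to node 4: %.3f\n", exp(-cost[4]));
          return 0;
      }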

  3. CD40L+ CD4+ memory T cells migrate in a CD62P-dependent fashion into reactive lymph nodes and license dendritic cells for T cell priming

    PubMed Central

    Martín-Fontecha, Alfonso; Baumjohann, Dirk; Guarda, Greta; Reboldi, Andrea; Hons, Miroslav; Lanzavecchia, Antonio; Sallusto, Federica

    2008-01-01

    There is growing evidence that the maturation state of dendritic cells (DCs) is a critical parameter determining the balance between tolerance and immunity. We report that mouse CD4+ effector memory T (TEM) cells, but not naive or central memory T cells, constitutively expressed CD40L at levels sufficient to induce DC maturation in vitro and in vivo in the absence of antigenic stimulation. CD4+ TEM cells were excluded from resting lymph nodes but migrated in a CD62P-dependent fashion into reactive lymph nodes that were induced to express CD62P, in a transient or sustained fashion, on high endothelial venules. Trafficking of CD4+ TEM cells into chronic reactive lymph nodes maintained resident DCs in a mature state and promoted naive T cell responses and experimental autoimmune encephalomyelitis (EAE) to antigens administered in the absence of adjuvants. Antibodies to CD62P, which blocked CD4+ TEM cell migration into reactive lymph nodes, inhibited DC maturation, T cell priming, and induction of EAE. These results show that TEM cells can behave as endogenous adjuvants and suggest a mechanistic link between lymphocyte traffic in lymph nodes and induction of autoimmunity. PMID:18838544

  4. Human CD30+ B cells represent a unique subset related to Hodgkin lymphoma cells.

    PubMed

    Weniger, Marc A; Tiacci, Enrico; Schneider, Stefanie; Arnolds, Judith; Rüschenbaum, Sabrina; Duppach, Janine; Seifert, Marc; Döring, Claudia; Hansmann, Martin-Leo; Küppers, Ralf

    2018-06-11

    Very few B cells in germinal centers (GCs) and extrafollicular (EF) regions of lymph nodes express CD30. Their specific features and relationship to CD30-expressing Hodgkin and Reed/Sternberg (HRS) cells of Hodgkin lymphoma are unclear but highly relevant, because numerous patients with lymphoma are currently treated with an anti-CD30 immunotoxin. We performed a comprehensive analysis of human CD30+ B cells. Phenotypic and IgV gene analyses indicated that CD30+ GC B lymphocytes represent typical GC B cells, and that CD30+ EF B cells are mostly post-GC B cells. The transcriptomes of CD30+ GC and EF B cells largely overlapped, sharing a strong MYC signature, but were strikingly different from conventional GC B cells and memory B and plasma cells, respectively. CD30+ GC B cells represent MYC+ centrocytes redifferentiating into centroblasts; CD30+ EF B cells represent active, proliferating memory B cells. HRS cells shared typical transcriptome patterns with CD30+ B cells, suggesting that they originate from these lymphocytes or acquire their characteristic features during lymphomagenesis. By comparing HRS to normal CD30+ B cells we redefined aberrant and disease-specific features of HRS cells. A remarkable downregulation of genes regulating genomic stability and cytokinesis in HRS cells may explain their genomic instability and multinuclearity.

  5. Extended write combining using a write continuation hint flag

    DOEpatents

    Chen, Dong; Gara, Alan; Heidelberger, Philip; Ohmacht, Martin; Vranas, Pavlos

    2013-06-04

    A computing apparatus for reducing the amount of processing in a network computing system which includes a network system device of a receiving node for receiving electronic messages comprising data. The electronic messages are transmitted from a sending node. The network system device determines when more data of a specific electronic message is being transmitted. A memory device stores the electronic message data and communicates with the network system device. A memory subsystem communicates with the memory device. The memory subsystem stores a portion of the electronic message when more data of the specific message will be received, and the buffer combines the portion with later received data and moves the data to the memory device for accessible storage.
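
    A minimal sketch of the buffering behavior the claim describes, assuming a per-packet flag that signals more data of the same message will follow: partial payloads are held in a combining buffer and written out only once the message is complete. All names and sizes here are invented for illustration.

      #include <stdio.h>
      #include <string.h>

      #define MSG_MAX 256

      static char combine_buf[MSG_MAX];   /* the combining buffer */
      static size_t held = 0;

      /* deliver one packet; 'more' is the write-continuation hint flag */
      void receive(const char *data, size_t len, int more) {
          memcpy(combine_buf + held, data, len);
          held += len;
          if (more)
              return;                     /* hold and combine with later data */
          /* message complete: one combined move to accessible storage */
          printf("stored %zu bytes: %.*s\n", held, (int)held, combine_buf);
          held = 0;
      }

      int main(void) {
          receive("Hello, ", 7, 1);       /* partial: buffered, not written */
          receive("world!", 6, 0);        /* final: combined and stored */
          return 0;
      }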

  6. Biomarkers for PTSD

    DTIC Science & Technology

    2014-07-01

    Molecular evidence of stress-induced acute heart injury in a mouse model simulating posttraumatic stress disorder. Proc Natl Acad Sci U S A. 2014 Feb...obtaining measures aligned with the core neurocognitive domains: IQ, working memory (auditory/visual), processing speed, verbal memory (immediate...in the test sample and combined sample with a similar pattern for the validation sample. Similarly, performance on tests of auditory and visual

  7. Cognitive Control Network Contributions to Memory-Guided Visual Attention

    PubMed Central

    Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.

    2016-01-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN (posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus) exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253

  8. Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures

    DTIC Science & Technology

    2017-10-04

    Report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures (Chapel Hill). The project develops efficient algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.

  9. Overview of the LINCS architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.; Watson, R.W.

    1982-01-13

    Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years with a computer network based resource sharing environment. The increasing use of low cost and high performance micro, mini and midi computers and commercially available local networking systems will accelerate this trend. Further, even the large scale computer systems, on which much of the LLNL scientific computing depends, are evolving into multiprocessor systems. It is our belief that the most cost effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is: how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitate development of cost effective, reliable, and human engineered applications. We believe the answer lies in developing a layered, communication oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication oriented because that is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While having characteristics in common with other architectures, it differs in several respects.

  10. Announcing Supercomputer Summit

    ScienceCinema

    Wells, Jack; Bland, Buddy; Nichols, Jeff; Hack, Jim; Foertter, Fernanda; Hagen, Gaute; Maier, Thomas; Ashfaq, Moetasim; Messer, Bronson; Parete-Koon, Suzanne

    2018-01-16

    Summit is the next leap in leadership-class computing systems for open science. With Summit we will be able to address, with greater complexity and higher fidelity, questions concerning who we are, our place on earth, and in our universe. Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017. Like Titan, Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA’s high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs plus 800GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of I/O throughput, the nodes will be connected in a non-blocking fat-tree using a dual-rail Mellanox EDR InfiniBand interconnect. Upon completion, Summit will allow researchers in all fields of science unprecedented access to solving some of the world’s most pressing challenges.

  11. The effect of the order in which episodic autobiographical memories versus autobiographical knowledge are shared on feelings of closeness.

    PubMed

    Brandon, Nicole R; Beike, Denise R; Cole, Holly E

    2017-07-01

    Autobiographical memories (AMs) can be used to create and maintain closeness with others [Alea, N., & Bluck, S. (2003). Why are you telling me that? A conceptual model of the social function of autobiographical memory. Memory, 11(2), 165-178]. However, the differential effects of memory specificity are not well established. Two studies with 148 participants tested whether the order in which autobiographical knowledge (AK) and specific episodic AM (EAM) are shared affects feelings of closeness. Participants read two memories hypothetically shared by each of four strangers. The strangers first shared either AK or an EAM, and then shared either AK or an EAM. Participants were randomly assigned to read either positive or negative AMs from the strangers. Findings suggest that people feel closer to those who share positive AMs in the same way they construct memories: starting with general and moving to specific.

  12. Indoxacarb, Metaflumizone, and Other Sodium Channel Inhibitor Insecticides: Mechanism and Site of Action on Mammalian Voltage-Gated Sodium Channels

    PubMed Central

    von Stein, Richard T.; Silver, Kristopher S.; Soderlund, David M.

    2013-01-01

    Sodium channel inhibitor (SCI) insecticides were discovered almost four decades ago but have only recently yielded important commercial products (e.g., indoxacarb and metaflumizone). SCI insecticides inhibit sodium channel function by binding selectively to slow-inactivated (non-conducting) sodium channel states. Characterization of the action of SCI insecticides on mammalian sodium channels using both biochemical and electrophysiological approaches demonstrates that they bind at or near a drug receptor site, the "local anesthetic (LA) receptor." This mechanism and site of action on sodium channels differentiates SCI insecticides from other insecticidal agents that act on sodium channels. However, SCI insecticides share a common mode of action with drugs currently under investigation as anticonvulsants and treatments for neuropathic pain. In this paper we summarize the development of the SCI insecticide class and the evidence that this structurally diverse group of compounds has a common mode of action on sodium channels. We then review research that has used site-directed mutagenesis and heterologous expression of cloned mammalian sodium channels in Xenopus laevis oocytes to further elucidate the site and mechanism of action of SCI insecticides. The results of these studies provide new insight into the mechanism of action of SCI insecticides on voltage-gated sodium channels, the location of the SCI insecticide receptor, and its relationship to the LA receptor that binds therapeutic SCI agents. PMID:24072940

  13. Language Comprehension as Structure Building

    DTIC Science & Technology

    1991-10-17

    Final Tech Report 89-0258, LANGUAGE COMPREHENSION AS STRUCTURE BUILDING, Morton Ann Gernsbacher, Psychology Department, University of Oregon, Eugene, OR 97403...they represent is no longer as necessary. My students and I have investigated the three subprocesses involved in structure building, namely, laying a...memory nodes; once activated, two cognitive mechanisms control memory nodes' activation levels: suppression and enhancement. My students and I have also

  14. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.

    1981-01-01

    Plotted transects made from south Texas daytime HCMM data show the effect of subvisible cirrus (SCi) clouds in the emissive (IR) band, but the effect is unnoticeable in the reflective (VIS) band. The depression of satellite-indicated temperatures was greatest in the center of SCi streamers and tapered off at the edges. Pixels of uncontaminated land and water features in the HCMM test area shared identical VIS and IR digital count combinations with other pixels representing similar features. A minimum of 0.015 percent repeats of identical VIS-IR combinations is characteristic of land and water features in a scene with 30 percent cloud cover. This increases to 0.021 percent or more when the scene is clear. Pixels having shared VIS-IR combinations less than these amounts are considered to be cloud contaminated in the cluster screening method. About twenty percent of SCi was machine-indistinguishable from land features in two-dimensional spectral space (VIS vs IR).

  15. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
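
    A minimal sketch of the strided shared-file layout described above, assuming fixed-size blocks: each MPI process writes its checkpoint block at an offset derived from its rank, so the processes' writes interleave in the shared file without overlapping. POSIX pwrite stands in for the solid-state storage node's file system interface, and the asynchronous migration to disk is omitted.

      #include <stdio.h>
      #include <string.h>
      #include <fcntl.h>
      #include <unistd.h>

      #define BLOCK 64

      /* one process's contribution to checkpoint 'round' */
      void write_checkpoint(int fd, int rank, int nprocs, int round,
                            const char *data) {
          off_t offset = (off_t)(round * nprocs + rank) * BLOCK;  /* stride */
          (void)pwrite(fd, data, strlen(data), offset);
      }

      int main(void) {
          int fd = open("ckpt.shared", O_CREAT | O_WRONLY, 0644);
          /* simulate 4 ranks of one MPI job writing round 0 */
          for (int rank = 0; rank < 4; ++rank) {
              char buf[BLOCK];
              snprintf(buf, sizeof buf, "rank %d state\n", rank);
              write_checkpoint(fd, rank, 4, 0, buf);
          }
          close(fd);
          return 0;
      }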

  16. Maladaptive spinal plasticity opposes spinal learning and recovery in spinal cord injury

    PubMed Central

    Ferguson, Adam R.; Huie, J. Russell; Crown, Eric D.; Baumbauer, Kyle M.; Hook, Michelle A.; Garraway, Sandra M.; Lee, Kuan H.; Hoy, Kevin C.; Grau, James W.

    2012-01-01

    Synaptic plasticity within the spinal cord has great potential to facilitate recovery of function after spinal cord injury (SCI). Spinal plasticity can be induced in an activity-dependent manner even without input from the brain after complete SCI. A mechanistic basis for these effects is provided by research demonstrating that spinal synapses have many of the same plasticity mechanisms that are known to underlie learning and memory in the brain. In addition, the lumbar spinal cord can sustain several forms of learning and memory, including limb-position training. However, not all spinal plasticity promotes recovery of function. Central sensitization of nociceptive (pain) pathways in the spinal cord may emerge in response to various noxious inputs, demonstrating that plasticity within the spinal cord may contribute to maladaptive pain states. In this review we discuss interactions between adaptive and maladaptive forms of activity-dependent plasticity in the spinal cord below the level of SCI. The literature demonstrates that activity-dependent plasticity within the spinal cord must be carefully tuned to promote adaptive spinal training. Prior work from our group has shown that stimulation that is delivered in a limb position-dependent manner or on a fixed interval can induce adaptive plasticity that promotes future spinal cord learning and reduces nociceptive hyper-reactivity. On the other hand, stimulation that is delivered in an unsynchronized fashion, such as randomized electrical stimulation or peripheral skin injuries, can generate maladaptive spinal plasticity that undermines future spinal cord learning, reduces recovery of locomotor function, and promotes nociceptive hyper-reactivity after SCI. We review these basic phenomena, how these findings relate to the broader spinal plasticity literature, discuss the cellular and molecular mechanisms, and finally discuss implications of these and other findings for improved rehabilitative therapies after SCI. PMID:23087647

  17. Non-volatile memory for checkpoint storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Cipolla, Thomas M.

    A system, method and computer program product for supporting system initiated checkpoints in high performance parallel computing systems and storing of checkpoint data to a non-volatile memory storage device. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. In one embodiment, the non-volatile memory is a pluggable flash memory card.

  18. Dual Tasking and Working Memory in Alcoholism: Relation to Frontocerebellar Circuitry

    PubMed Central

    Chanraud, Sandra; Pitel, Anne-Lise; Rohlfing, Torsten; Pfefferbaum, Adolf; Sullivan, Edith V

    2010-01-01

    Controversy exists regarding the role of cerebellar systems in cognition and whether working memory compromise commonly marking alcoholism can be explained by compromise of nodes of corticocerebellar circuitry. We tested 17 alcoholics and 31 age-matched controls with dual-task, working memory paradigms. Interference tasks competed with verbal and spatial working memory tasks using low (three item) or high (six item) memory loads. Participants also underwent structural MRI to obtain volumes of nodes of the frontocerebellar system. On the verbal working memory task, both groups performed equally. On the spatial working memory with the high-load task, the alcoholic group was disproportionately more affected by the arithmetic distractor than were controls. In alcoholics, volumes of the left thalamus and left cerebellar Crus I volumes were more robust predictors of performance in the spatial working memory task with the arithmetic distractor than the left frontal superior cortex. In controls, volumes of the right middle frontal gyrus and right cerebellar Crus I were independent predictors over the left cerebellar Crus I, left thalamus, right superior parietal cortex, or left middle frontal gyrus of spatial working memory performance with tracking interference. The brain–behavior correlations suggest that alcoholics and controls relied on the integrity of certain nodes of corticocerebellar systems to perform these verbal and spatial working memory tasks, but that the specific pattern of relationships differed by group. The resulting brain structure–function patterns provide correlational support that components of this corticocerebellar system not typically related to normal performance in dual-task conditions may be available to augment otherwise dampened performance by alcoholics. PMID:20410871

  19. Role of the Local Anesthetic Receptor in the State-Dependent Inhibition of Voltage-Gated Sodium Channels by the Insecticide Metaflumizone

    PubMed Central

    von Stein, Richard T.

    2012-01-01

    Sodium channel inhibitor (SCI) insecticides selectively target voltage-gated sodium (Nav) channels in the slow-inactivated state by binding at or near the local anesthetic receptor within the sodium channel pore. Metaflumizone is a new insecticide for the treatment of fleas on domesticated pets and has recently been reported to block insect sodium channels in the slow-inactivated state, thereby implying that it is also a member of the SCI class. Using the two-electrode voltage-clamp technique, we examined metaflumizone inhibition of rat Nav1.4 sodium channels expressed in Xenopus laevis oocytes. Metaflumizone selectively inhibited Nav1.4 channels at potentials that promoted slow inactivation and shifted the voltage dependence of slow inactivation in the direction of hyperpolarization. Metaflumizone perfusion at a hyperpolarized holding potential also shifted the conductance-voltage curve for activation in the direction of depolarization and antagonized use-dependent lidocaine inhibition of fast-inactivated sodium channels, actions not previously observed with other SCI insecticides. We expressed mutated Nav1.4/F1579A and Nav1.4/Y1586A channels to investigate whether metaflumizone shares the domain IV segment S6 (DIV-S6) binding determinants identified for other SCI insecticides. Consistent with previous investigations of SCI insecticides on rat Nav1.4 channels, the F1579A mutation reduced sensitivity to block by metaflumizone, whereas the Y1586A mutation paradoxically increased the sensitivity to metaflumizone. We conclude that metaflumizone selectively inhibits slow-inactivated Nav1.4 channels and shares DIV-S6 binding determinants with other SCI insecticides and therapeutic drugs. However, our results suggest that metaflumizone interacts with resting and fast-inactivated channels in a manner that is distinct from other compounds in this insecticide class. PMID:22127519

  20. Finding of increased caudate nucleus in patients with Alzheimer's disease.

    PubMed

    Persson, K; Bohbot, V D; Bogdanovic, N; Selbaek, G; Braekhus, A; Engedal, K

    2018-02-01

    A recently published study using an automated MRI volumetry method (NeuroQuant®) unexpectedly demonstrated larger caudate nucleus volume in patients with Alzheimer's disease dementia (AD) compared to patients with subjective and mild cognitive impairment (SCI and MCI). The aim of this study was to explore this finding. The caudate nucleus and the hippocampus volumes were measured (both expressed as ratios of intracranial volume) in a total of 257 patients with SCI and MCI according to the Winblad criteria and AD according to ICD-10 criteria. Demographic data, cognitive measures, and APOE-ɛ4 status were collected. Compared with non-dementia patients (SCI and MCI), AD patients were older, more of them were female, and they had a larger caudate nucleus volume and smaller hippocampus volume (P<.001). In multiple linear regression analysis, age and female sex were associated with larger caudate nucleus volume, but neither diagnosis nor memory function was. Age, gender, and memory function were associated with hippocampus volume, and age and memory function were associated with caudate nucleus/hippocampus ratio. A larger caudate nucleus volume in AD patients was partly explained by older age and being female. These results are further discussed in the context of (1) the caudate nucleus possibly serving as a mechanism for temporary compensation; (2) methodological properties of automated volumetry of this brain region; and (3) neuropathological alterations. Further studies are needed to fully understand the role of the caudate nucleus in AD. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Implication of altered autonomic control for orthostatic tolerance in SCI.

    PubMed

    Wecht, Jill Maria; Bauman, William A

    2018-01-01

    Neural output from the sympathetic and parasympathetic branches of the autonomic nervous system (ANS) are integrated to appropriately control cardiovascular responses during routine activities of daily living including orthostatic positioning. Sympathetic control of the upper extremity vasculature and the heart arises from the thoracic cord between T1 and T5, whereas splanchnic bed and lower extremity vasculature receive sympathetic neural input from the lower cord between segments T5 and L2. Although the vasculature is not directly innervated by the parasympathetic nervous system, the SA node is innervated by post-ganglionic vagal nerve fibers via cranial nerve X. Segmental differences in sympathetic cardiovascular innervation highlight the effect of lesion level on orthostatic cardiovascular control following spinal cord injury (SCI). Due to impaired sympathetic cardiovascular control, many individuals with SCI, particularly those with lesions above T6, are prone to orthostatic hypotension (OH) and orthostatic intolerance (OI). Symptomatic OH, which may result in OI, is a consequence of episodic reductions in cerebral perfusion pressure and the symptoms may include: dizziness, lightheadedness, nausea, blurred vision, ringing in the ears, headache and syncope. However, many, if not most, individuals with SCI who experience persistent and episodic hypotension and OH do not report symptoms of cerebral hypoperfusion and therefore do not raise clinical concern. This review will discuss the mechanism underlying OH and OI following SCI, and will review our knowledge to date regarding the prevalence, consequences and possible treatment options for these conditions in the SCI population. Published by Elsevier B.V.

  2. Multiple mediation path model of pain's cascading influence on physical disability in individuals with SCI from Colombia, South America.

    PubMed

    Perrin, Paul B; Paredes, Alejandra Morlett; Olivera, Silvia Leonor; Lozano, Juan Esteban; Leal, Wendy Tatiana; Ahmad, Usman F; Arango-Lasprilla, Juan Carlos

    2017-01-01

    Research has begun to document the bivariate connections between pain in individuals with spinal cord injury (SCI) and various aspects of health related quality of life (HRQOL), such as fatigue, social functioning, mental health, and physical functioning. The purpose of this study was to construct and test a theoretical path model illuminating the stage-wise and sequential (cascading) HRQOL pathways through which pain increases physical disability in individuals with SCI in a sample from Colombia, South America. It was hypothesized that increased pain would lead to decreased energy, which would lead to decreased mental health and social functioning, which both would lead to emotional role limitations, which finally would lead to physical role limitations. A cross-sectional study assessed individuals with SCI (n = 40) in Neiva, Colombia. Participants completed a measure indexing various aspects of HRQOL. The path model overall showed excellent fit indices, and each individual path within the model was statistically significant. Pain exerted significant indirect effects through all possible mediators in the model, ultimately suggesting that energy, mental health, social functioning, and role limitations-emotional were likely pathways through which pain exerted its effects on physical disability in individuals with SCI. These findings uncover several potential nodes for clinical intervention which if targeted in the context of rehabilitation or outpatient services, could result in salubrious direct and indirect effects reverberating down the theoretical causal chain and ultimately reducing physical disability in individuals with SCI.

  3. Aging and Semantic Activation.

    ERIC Educational Resources Information Center

    Howard, Darlene V.

    Three studies tested the theory that long term memory consists of a semantically organized network of concept nodes interconnected by leveled associations or relations, and that when a stimulus is processed, the corresponding concept node is assumed to be temporarily activated and this activation spreads to nearby semantically related nodes. In…

  4. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  5. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
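
    A minimal sketch of the two-stage scheme in these patents: 1D transforms along the locally held dimension, an all-to-all redistribution over the network, then 1D transforms along the second dimension. A naive O(n^2) DFT stands in for a real FFT, and blocks are exchanged in plain MPI_Alltoall order rather than the randomized order the patents describe. Compile with mpicc and link -lm; N must be divisible by the number of ranks.

      #include <complex.h>
      #include <math.h>
      #include <stdio.h>
      #include <mpi.h>

      #define N  8
      #define PI 3.14159265358979323846

      static void dft(double complex *x) {  /* naive stand-in for a 1D FFT */
          double complex y[N];
          for (int k = 0; k < N; ++k) {
              y[k] = 0;
              for (int j = 0; j < N; ++j)
                  y[k] += x[j] * cexp(-2.0 * I * PI * k * j / N);
          }
          for (int k = 0; k < N; ++k) x[k] = y[k];
      }

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);
          int rank, np;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &np);
          int rows = N / np;                   /* rows of the array per node */

          double complex loc[rows * N], pack[rows * N], recv[rows * N];
          for (int i = 0; i < rows * N; ++i)   /* arbitrary test data */
              loc[i] = rank * rows * N + i;

          for (int i = 0; i < rows; ++i)       /* pass 1: FFT along dim 1 */
              dft(&loc[i * N]);

          /* pack column blocks: rank r gets columns r*rows..(r+1)*rows-1 */
          for (int r = 0; r < np; ++r)
              for (int i = 0; i < rows; ++i)
                  for (int c = 0; c < rows; ++c)
                      pack[(r * rows + i) * rows + c] = loc[i * N + r * rows + c];

          /* "all-to-all" redistribution across the network */
          MPI_Alltoall(pack, rows * rows * 2, MPI_DOUBLE,
                       recv, rows * rows * 2, MPI_DOUBLE, MPI_COMM_WORLD);

          /* local transpose: lay each received column out contiguously */
          double complex col[rows * N];
          for (int c = 0; c < rows; ++c)
              for (int r = 0; r < np; ++r)
                  for (int i = 0; i < rows; ++i)
                      col[c * N + r * rows + i] = recv[(r * rows + i) * rows + c];

          for (int c = 0; c < rows; ++c)       /* pass 2: FFT along dim 2 */
              dft(&col[c * N]);

          if (rank == 0) printf("F[0][0] = %g\n", creal(col[0]));
          MPI_Finalize();
          return 0;
      }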

  6. Mapping virtual addresses to different physical addresses for value disambiguation for thread memory access requests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Ohmacht, Martin

    A multiprocessor system includes nodes. Each node includes a data path that includes a core, a TLB, and a first level cache implementing disambiguation. The system also includes at least one second level cache and a main memory. For thread memory access requests, the core uses an address associated with an instruction format of the core. The first level cache uses an address format related to the size of the main memory plus an offset corresponding to hardware thread meta data. The second level cache uses a physical main memory address plus software thread meta data to store the memory access request. The second level cache accesses the main memory using the physical address with neither the offset nor the thread meta data after resolving speculation. In short, this system includes mapping of a virtual address to different physical addresses for value disambiguation for different threads.
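
    A minimal sketch of the disambiguation idea, assuming a simple tagging scheme: while speculation is unresolved, a hardware thread's accesses carry a per-thread offset above the main-memory address bits, so different threads' speculative versions of the same location map to distinct addresses; the tag is stripped once speculation resolves and main memory is accessed normally. The field widths here are invented.

      #include <stdio.h>
      #include <stdint.h>

      #define MEM_BITS 34   /* invented: bits covering real main memory */

      /* cache-level view: physical address plus a per-thread tag */
      static uint64_t speculative_addr(uint64_t phys, unsigned thread_id) {
          return phys | ((uint64_t)thread_id << MEM_BITS);
      }

      /* after speculation resolves, the tag is stripped for main memory */
      static uint64_t resolve(uint64_t tagged) {
          return tagged & (((uint64_t)1 << MEM_BITS) - 1);
      }

      int main(void) {
          uint64_t a0 = speculative_addr(0x1000, 0);   /* thread 0's version */
          uint64_t a1 = speculative_addr(0x1000, 1);   /* thread 1's version */
          printf("distinct while speculative: %d\n", a0 != a1);
          printf("same after resolution: %d\n", resolve(a0) == resolve(a1));
          return 0;
      }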

  7. Shared Semantics and the Use of Organizational Memories for E-Mail Communications.

    ERIC Educational Resources Information Center

    Schwartz, David G.

    1998-01-01

    Examines the use of shared semantics information to link concepts in an organizational memory to e-mail communications. Presents a framework for determining shared semantics based on organizational and personal user profiles. Illustrates how shared semantics are used by the HyperMail system to help link organizational memories (OM) content to…

  8. Isolated spinal cord contusion in rats induces chronic brain neuroinflammation, neurodegeneration, and cognitive impairment. Involvement of cell cycle activation.

    PubMed

    Wu, Junfang; Stoica, Bogdan A; Luo, Tao; Sabirzhanov, Boris; Zhao, Zaorui; Guanciale, Kelsey; Nayar, Suresh K; Foss, Catherine A; Pomper, Martin G; Faden, Alan I

    2014-01-01

    Cognitive dysfunction has been reported in patients with spinal cord injury (SCI), but it has been questioned whether such changes may reflect concurrent head injury, and the issue has not been addressed mechanistically or in a well-controlled experimental model. Our recent rodent studies examining SCI-induced hyperesthesia revealed neuroinflammatory changes not only in supratentorial pain-regulatory sites, but also in other brain regions, suggesting that additional brain functions may be impacted following SCI. Here we examined effects of isolated thoracic SCI in rats on cognition, brain inflammation, and neurodegeneration. We show for the first time that SCI causes widespread microglial activation in the brain, with increased expression of markers for activated microglia/macrophages, including translocator protein and chemokine ligand 21 (C-C motif). Stereological analysis demonstrated significant neuronal loss in the cortex, thalamus, and hippocampus. SCI caused chronic impairment in spatial, retention, contextual, and fear-related emotional memory-evidenced by poor performance in the Morris water maze, novel objective recognition, and passive avoidance tests. Based on our prior work implicating cell cycle activation (CCA) in chronic neuroinflammation after SCI or traumatic brain injury, we evaluated whether CCA contributed to the observed changes. Increased expression of cell cycle-related genes and proteins was found in hippocampus and cortex after SCI. Posttraumatic brain inflammation, neuronal loss, and cognitive changes were attenuated by systemic post-injury administration of a selective cyclin-dependent kinase inhibitor. These studies demonstrate that chronic brain neurodegeneration occurs after isolated SCI, likely related to sustained microglial activation mediated by cell cycle activation.

  9. Nanophotonic rare-earth quantum memory with optically controlled retrieval.

    PubMed

    Zhong, Tian; Kindem, Jonathan M; Bartholomew, John G; Rochman, Jake; Craiciu, Ioana; Miyazono, Evan; Bettinelli, Marco; Cavalli, Enrico; Verma, Varun; Nam, Sae Woo; Marsili, Francesco; Shaw, Matthew D; Beyer, Andrew D; Faraon, Andrei

    2017-09-29

    Optical quantum memories are essential elements in quantum networks for long-distance distribution of quantum entanglement. Scalable development of quantum network nodes requires on-chip qubit storage functionality with control of the readout time. We demonstrate a high-fidelity nanophotonic quantum memory based on a mesoscopic neodymium ensemble coupled to a photonic crystal cavity. The nanocavity enables >95% spin polarization for efficient initialization of the atomic frequency comb memory and time bin-selective readout through an enhanced optical Stark shift of the comb frequencies. Our solid-state memory is integrable with other chip-scale photon source and detector devices for multiplexed quantum and classical information processing at the network nodes. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  10. A randomized controlled trial to test the efficacy of the SCI Get Fit Toolkit on leisure-time physical activity behaviour and social-cognitive processes in adults with spinal cord injury.

    PubMed

    Arbour-Nicitopoulos, Kelly P; Sweet, Shane N; Lamontagne, Marie-Eve; Ginis, Kathleen A Martin; Jeske, Samantha; Routhier, François; Latimer-Cheung, Amy E

    2017-01-01

    Single blind, two-group randomized controlled trial. To evaluate the efficacy of the SCI Get Fit Toolkit delivered online on theoretical constructs and moderate-to-vigorous physical activity (MVPA) among adults with SCI. Ontario and Quebec, Canada. Inactive, English- and French-speaking Canadian adults with traumatic SCI with Internet access, and no self-reported cognitive or memory impairments. Participants (N=90; mean age 48.12±11.29 years; 79% male) were randomized to view the SCI Get Fit Toolkit or the Physical Activity Guidelines for adults with SCI (PAG-SCI) online. Primary (intentions) and secondary (outcome expectancies, self-efficacy, planning and MVPA behaviour) outcomes were assessed over a 1-month period. Of the 90 participants randomized, 77 were included in the analyses. Participants viewed the experimental stimuli only briefly, reading the 4-page toolkit for approximately 2.5 min longer than the 1-page guideline document. No condition effects were found for intentions, outcome expectancies, self-efficacy, and planning (ΔR2≤0.03). Individuals in the toolkit condition were more likely to participate in at least one bout of 20 min of MVPA behaviour at 1-week post-intervention compared to individuals in the guidelines condition (OR=3.54, 95% CI=0.95, 13.17). However, no differences were found when examining change in weekly minutes of MVPA or comparing whether participants met the PAG-SCI. No firm conclusions can be made regarding the impact of the SCI Get Fit Toolkit in comparison to the PAG-SCI on social cognitions and MVPA behaviour. The limited online access to this resource may partially explain these null findings.

  11. Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; anMey, Dieter; Hatay, Ferhat F.

    2003-01-01

    With the advent of parallel hardware and software technologies, users are faced with the challenge of choosing the programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is best depends on the nature of the given problem, the hardware architecture, and the available software. In this study we compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.
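
    A minimal sketch of the hybrid paradigm the study compares (not its benchmark code): MPI ranks, placed one per SMP node, split a quadrature for pi into subintervals; OpenMP threads share the loop within each node; and MPI_Reduce combines the per-node partial sums. Compile with mpicc -fopenmp.

      #include <stdio.h>
      #include <mpi.h>

      #define N_LOCAL 1000000L        /* quadrature points per rank */

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);
          int rank, np;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &np);

          long n = np * N_LOCAL;      /* total quadrature points */
          double local = 0.0;

          /* shared-memory level: threads split this node's subinterval */
          #pragma omp parallel for reduction(+:local)
          for (long i = 0; i < N_LOCAL; ++i) {
              double x = (rank * N_LOCAL + i + 0.5) / n;
              local += 4.0 / (1.0 + x * x) / n;   /* midpoint rule for pi */
          }

          /* distributed-memory level: combine the per-node partial sums */
          double pi = 0.0;
          MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0) printf("pi ~= %.8f\n", pi);

          MPI_Finalize();
          return 0;
      }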

  12. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
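
    A minimal sketch of the replicated-object idea described above, assuming a trivially simplified accumulation kernel: each thread updates its own replica of the reconstruction array, so no locks or atomics are needed during accumulation, and the replicas are merged in a final reduction. The dimensions and the ray-to-pixel mapping are invented.

      /* Compile: cc -fopenmp replica.c */
      #include <stdio.h>
      #include <stdlib.h>
      #include <omp.h>

      #define NPIX  4096
      #define NRAYS 100000

      int main(void) {
          int nthreads = omp_get_max_threads();
          /* one replica of the reconstruction object per thread */
          double *replica = calloc((size_t)nthreads * NPIX, sizeof *replica);
          double *image = calloc(NPIX, sizeof *image);

          #pragma omp parallel
          {
              double *mine = replica + (size_t)omp_get_thread_num() * NPIX;
              #pragma omp for
              for (int r = 0; r < NRAYS; ++r) {
                  int pix = r % NPIX;    /* stand-in for ray-pixel mapping */
                  mine[pix] += 1.0;      /* lock-free local accumulation */
              }
          }

          for (int t = 0; t < nthreads; ++t)   /* merge the replicas */
              for (int p = 0; p < NPIX; ++p)
                  image[p] += replica[(size_t)t * NPIX + p];

          printf("image[0] = %g\n", image[0]);
          free(replica);
          free(image);
          return 0;
      }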

  13. Distributed processing method for arbitrary view generation in camera sensor network

    NASA Astrophysics Data System (ADS)

    Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki

    2003-05-01

    A camera sensor network, a recent advent of technology, is a network in which each sensor node can capture video signals, process them, and communicate them to other nodes. The processing task in this network is to generate an arbitrary view, which can be requested from a central node or by a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we have distributed the processing tasks among the nodes. In this method, each sensor node executes part of the interpolation algorithm to generate the interpolated image using local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, which is an object-independent method based on MSE minimization using adaptive filtering. Two methods are proposed for distributing the processing tasks and sharing image data locally: Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP). Comparison of the proposed methods with the Centralized Processing (CP) method shows that PIS-DP has the highest processing speed after FIS-DP, and CP has the lowest. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended because of its better overall performance than CP and FIS-DP.

  14. Exploiting Efficient Transpacking for One-Sided Communication and MPI-IO

    NASA Astrophysics Data System (ADS)

    Mir, Faisal Ghias; Träff, Jesper Larsson

    Based on a construction of so-called input-output datatypes that define a mapping between non-consecutive input and output buffers, we outline an efficient method for copying structured data. We term this operation transpacking and show how it can be applied in the MPI implementation of one-sided communication and MPI-IO. For one-sided communication via shared memory, we demonstrate the expected performance improvements of up to a factor of two. For individual MPI-IO, the time to read or write the file dominates the overall time, but even here efficient transpacking can in some scenarios reduce file I/O time considerably. The reported results were achieved on a single NEC SX-8 vector node.
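
    A minimal sketch of transpacking with an invented segment-list representation of the input and output datatypes: the copy walks both lists of (offset, length) pieces at once and moves the overlap of the current pieces directly, avoiding the pack-to-staging-buffer-then-unpack detour.

      #include <stdio.h>
      #include <string.h>

      struct seg { size_t off, len; };   /* one contiguous piece of a layout */

      /* fused copy across two non-contiguous layouts, no staging buffer */
      void transpack(char *dst, const struct seg *ds, int nd,
                     const char *src, const struct seg *ss, int ns) {
          size_t dused = 0, sused = 0;
          int di = 0, si = 0;
          while (di < nd && si < ns) {
              size_t d_rem = ds[di].len - dused, s_rem = ss[si].len - sused;
              size_t n = d_rem < s_rem ? d_rem : s_rem;
              memcpy(dst + ds[di].off + dused, src + ss[si].off + sused, n);
              if ((dused += n) == ds[di].len) { ++di; dused = 0; }
              if ((sused += n) == ss[si].len) { ++si; sused = 0; }
          }
      }

      int main(void) {
          const char src[17] = "ABCDEFGHIJKLMNOP";
          char dst[17] = "................";
          struct seg in[]  = {{0, 4}, {8, 4}};           /* strided input  */
          struct seg out[] = {{2, 2}, {6, 2}, {10, 4}};  /* other layout   */
          transpack(dst, out, 3, src, in, 2);
          printf("%s\n", dst);     /* prints ..AB..CD..IJKL.. */
          return 0;
      }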

  15. The Role of Scientific Communication Skills in Trainees' Intention to Pursue Biomedical Research Careers: A Social Cognitive Analysis.

    PubMed

    Cameron, Carrie; Lee, Hwa Young; Anderson, Cheryl; Byars-Winston, Angela; Baldwin, Constance D; Chang, Shine

    2015-01-01

    Scientific communication (SciComm) skills are indispensable for success in biomedical research, but many trainees may not have fully considered the necessity of regular writing and speaking for research career progression. Our purpose was to investigate the relationship between SciComm skill acquisition and research trainees' intentions to remain in research careers. We used social cognitive career theory (SCCT) to test a model of the relationship of SciComm skills to SciComm-related cognitive variables in explaining career intentions. A sample of 510 graduate students and postdoctoral fellows at major academic health science centers in the Texas Medical Center, Houston, Texas, were surveyed online. Results suggested that interest in performing SciComm tasks, SciComm outcome expectations (SCOEs), and SciComm productivity predicted intention to remain in a research career, while SciComm self-efficacy did not directly predict career intention. SCOEs also predicted interest in performing SciComm tasks. As in other SCCT studies, SciComm self-efficacy predicted SCOEs. We conclude that social cognitive factors of SciComm skill acquisition and SciComm productivity significantly predict biomedical trainees' intentions to pursue research careers whether within or outside academia. While further studies are needed, these findings may lead to evidence-based interventions to help trainees remain in their chosen career paths. © 2015 C. Cameron et al. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  16. Analytic model for the long-term evolution of circular Earth satellite orbits including lunar node regression

    NASA Astrophysics Data System (ADS)

    Zhu, Ting-Lei; Zhao, Chang-Yin; Zhang, Ming-Jiang

    2017-04-01

    This paper aims to obtain an analytic approximation to the evolution of circular orbits governed by the Earth's J2 and the luni-solar gravitational perturbations. Assuming that the lunar orbital plane coincides with the ecliptic plane, Allan and Cook (Proc. R. Soc. A, Math. Phys. Eng. Sci. 280(1380):97, 1964) derived an analytic solution for the orbital plane evolution of circular orbits. Using their result as an intermediate solution, we establish an approximate analytic model that takes the lunar orbital inclination and its node regression into account. Finally, an approximate analytic expression is derived that agrees well with numerical results, except in resonant cases where the period of the reference orbit approximately equals an integer multiple (especially 1 or 2 times) of the lunar node regression period.
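
    For orientation, the dominant secular effect underlying such models is the nodal regression driven by the Earth's J2 term. A standard first-order textbook expression (background only, not the paper's full luni-solar solution, which adds lunar terms on top of it) is:

        % Secular regression rate of the ascending node for a circular orbit
        % of semi-major axis a and inclination i under the J2 perturbation:
        \dot{\Omega} = -\frac{3}{2}\, n\, J_2 \left( \frac{R_E}{a} \right)^{2} \cos i
        % n: mean motion of the satellite, R_E: Earth's equatorial radius.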

  17. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Current research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in a grid environment, while the importance of spatial computing resources is ignored. To enable the sharing of, and cooperation among, spatial computing resources in a grid environment, this paper systematically investigates the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification of the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and verified in this environment.

  18. Harnessing the power of emerging petascale platforms

    NASA Astrophysics Data System (ADS)

    Mellor-Crummey, John

    2007-07-01

    As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratories. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 50^3 domain.
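
    The flavor of such loop-nest transformations can be shown with a classic example: interchanging a cache-hostile loop order to restore unit-stride access, then tiling for reuse. This is an illustrative C sketch of the technique in general, not the actual CScADS tool output for S3D.

        #define N 1024
        #define TILE 64

        /* Original order walks down columns of a[][]: every access is a
           large stride, so nearly every reference misses in cache. */
        void smooth_naive(double a[N][N], const double b[N][N]) {
            for (int j = 1; j < N - 1; j++)
                for (int i = 1; i < N - 1; i++)
                    a[i][j] = 0.25 * (b[i-1][j] + b[i+1][j]
                                    + b[i][j-1] + b[i][j+1]);
        }

        /* Interchanged and tiled: the innermost loop is unit stride, and
           each TILE x TILE block stays resident in cache while it is used. */
        void smooth_tiled(double a[N][N], const double b[N][N]) {
            for (int ii = 1; ii < N - 1; ii += TILE)
                for (int jj = 1; jj < N - 1; jj += TILE)
                    for (int i = ii; i < ii + TILE && i < N - 1; i++)
                        for (int j = jj; j < jj + TILE && j < N - 1; j++)
                            a[i][j] = 0.25 * (b[i-1][j] + b[i+1][j]
                                            + b[i][j-1] + b[i][j+1]);
        }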

  19. Strategies for Energy Efficient Resource Management of Hybrid Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dong; Supinski, Bronis de; Schulz, Martin

    2013-01-01

    Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.
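
    The selection step behind DCT + DVFS schemes of this kind can be sketched as follows: predict time and power for each (threads, frequency) configuration from a fitted model, then pick the configuration minimizing energy subject to a slowdown bound. The predict_* functions below are assumed toy stand-ins for the paper's statistical models, not its actual equations.

        #include <stdio.h>

        typedef struct { int threads; double ghz; } config_t;

        /* Toy stand-ins: real schemes fit such models from hardware-counter
           data; these closed forms are assumptions for illustration only. */
        static double predict_time(int threads, double ghz) {
            const double serial = 0.2, parallel = 0.8;   /* Amdahl-style split */
            return (serial + parallel / threads) / ghz;
        }
        static double predict_power(int threads, double ghz) {
            return 10.0 + 2.0 * threads * ghz * ghz;     /* crude dynamic power */
        }

        /* Minimize predicted energy = time * power, within a slowdown bound
           relative to the fastest candidate configuration. */
        static config_t pick_config(const config_t *cand, int n,
                                    double max_slowdown) {
            double t_best = 1e300;
            for (int k = 0; k < n; k++) {
                double t = predict_time(cand[k].threads, cand[k].ghz);
                if (t < t_best) t_best = t;
            }
            config_t best = cand[0];
            double e_best = 1e300;
            for (int k = 0; k < n; k++) {
                double t = predict_time(cand[k].threads, cand[k].ghz);
                if (t > max_slowdown * t_best) continue; /* performance bound */
                double e = t * predict_power(cand[k].threads, cand[k].ghz);
                if (e < e_best) { e_best = e; best = cand[k]; }
            }
            return best;
        }

        int main(void) {
            config_t cand[] = { {4, 2.4}, {8, 2.4}, {8, 1.8}, {16, 1.8}, {16, 2.4} };
            config_t c = pick_config(cand, 5, 1.10);  /* allow 10% slowdown */
            printf("chosen: %d threads @ %.1f GHz\n", c.threads, c.ghz);
            return 0;
        }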

  20. Porting Gravitational Wave Signal Extraction to Parallel Virtual Machine (PVM)

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Thompson, David E.; Redmon, Jeffery

    2009-01-01

    Laser Interferometer Space Antenna (LISA) is a planned NASA-ESA mission to be launched around 2012. Gravitational wave detection is fundamentally the determination of frequency, source parameters, and waveform amplitude, derived in a specific order from the interferometric time series of the rotating LISA spacecraft. The LISA Science Team has developed a Mock LISA Data Challenge intended to promote the testing of complicated nested search algorithms to detect signals in the 1-100 millihertz band at amplitudes of 10^-21. However, it has become clear that sequential search of the parameters is very time consuming and ultra-sensitive; hence, a new strategy has been developed. Parallelization of existing sequential search algorithms for gravitational wave signal identification consists of decomposing sequential search loops, beginning with the outermost loops and working inward. In this process, the main challenge is to detect interdependencies among loops and to partition the loops so as to preserve concurrency. Existing parallel programs are based upon either shared memory or distributed memory paradigms. In PVM, master and node programs are used to execute parallelization and process spawning. PVM can handle process management and process addressing schemes using a virtual machine configuration. Task scheduling, messaging, and signaling can be implemented efficiently for the LISA gravitational wave search process using a master and 6 nodes. This approach is accomplished using a server at NASA Ames Research Center that has been dedicated to the LISA Data Challenge Competition. Historically, extracting gravitational wave source identification parameters has taken around 7 days on this dedicated single-thread Linux-based server. Using the PVM approach, the parameter extraction problem can be reduced to within a day. The low-frequency computation and a proxy signal-to-noise ratio are calculated in separate nodes that are controlled by the master using message and vector-of-data passing. The message passing among nodes follows a pattern of synchronous and asynchronous send-and-receive protocols. The communication model and the message buffers are allocated dynamically to support rapid search for gravitational wave source information in the Mock LISA data sets.
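
    A minimal PVM master in the spirit of the setup described above might look as follows: spawn 6 workers, ship each a slice of the time series, and collect proxy SNR values. The worker executable name "gw_worker", the message tags, and the slice size are illustrative assumptions, not the actual mission code.

        #include <stdio.h>
        #include "pvm3.h"

        #define NWORKERS 6
        #define TAG_WORK 1
        #define TAG_DONE 2

        int main(void) {
            int tids[NWORKERS];
            double slice[1024] = {0};   /* stand-in for interferometric data */

            /* Spawn the node programs; PVM handles process management. */
            if (pvm_spawn("gw_worker", NULL, PvmTaskDefault, "",
                          NWORKERS, tids) != NWORKERS) {
                fprintf(stderr, "spawn failed\n");
                pvm_exit();
                return 1;
            }
            for (int i = 0; i < NWORKERS; i++) {     /* scatter data slices */
                pvm_initsend(PvmDataDefault);
                pvm_pkdouble(slice, 1024, 1);
                pvm_send(tids[i], TAG_WORK);
            }
            for (int i = 0; i < NWORKERS; i++) {     /* gather proxy SNRs */
                double snr;
                pvm_recv(-1, TAG_DONE);              /* any worker, done tag */
                pvm_upkdouble(&snr, 1, 1);
                printf("result %d: SNR proxy %.3f\n", i, snr);
            }
            pvm_exit();
            return 0;
        }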

  1. Protective Capacity of Memory CD8+ T Cells is Dictated by Antigen Exposure History and Nature of the Infection

    PubMed Central

    Nolz, Jeffrey C.; Harty, John T.

    2011-01-01

    SUMMARY Infection or vaccination confers heightened resistance to pathogen re-challenge due to quantitative and qualitative differences between naïve and primary memory T cells. Herein, we show that secondary (boosted) memory CD8+ T cells were better than primary memory CD8+ T cells at controlling some, but not all, acute infections with diverse pathogens. However, secondary memory CD8+ T cells were less efficient than an equal number of primary memory cells at preventing chronic LCMV infection and were more susceptible to functional exhaustion. Importantly, localization of memory CD8+ T cells within lymph nodes, which is reduced by antigen re-stimulation, was critical both for viral control in lymph nodes and for the sustained CD8+ T cell response required to prevent chronic LCMV infection. Thus, repeated antigen stimulation shapes memory CD8+ T cell populations to either enhance or decrease per-cell protective immunity in a pathogen-specific manner, a concept of importance in vaccine design against specific diseases. PMID:21549619

  2. Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite

    DTIC Science & Technology

    2016-02-01

    FOB is a 64-node heterogeneous cluster consisting of 16 IBM dx360M4 nodes, each with one NVIDIA Kepler K20M GPU, and 48 IBM dx360M4 nodes, and each ... nodes have 256 GB of memory and an NVIDIA Tesla K40 GPU. More details on Excalibur can be found on the US Army DSRC website [19]. Figures 3 and 4 show the

  3. Memory-induced mechanism for self-sustaining activity in networks

    NASA Astrophysics Data System (ADS)

    Allahverdyan, A. E.; Steeg, G. Ver; Galstyan, A.

    2015-12-01

    We study a mechanism for sustaining activity on networks, inspired by a well-known model of neuronal dynamics. Our primary focus is the emergence of self-sustaining collective activity patterns, where no single node can stay active by itself, but the activity provided initially is sustained within the collective of interacting agents. In contrast to existing models of self-sustaining activity that rely on (long) loops present in the network, here we focus on treelike structures and examine activation mechanisms that are due to temporal memory of the nodes. This approach is motivated by applications in social media, where long network loops are rare or absent. Our results suggest that under weak behavioral noise, the nodes robustly split into several clusters, with partial synchronization of nodes within each cluster. We also study a randomly weighted version of the model in which the nodes are allowed to change their connection strengths (which can model attention redistribution) and show that it does facilitate the self-sustained activity.

  4. DataSync - sharing data via filesystem

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Klump, Jens

    2014-05-01

    Research typically follows a cycle: hypothesize, collect data, corroborate the hypothesis, and finally publish the results. At several points in this sequence one can build on the work of others. Candidate physical samples may already be listed in the IGSN registry, removing the need for a field excursion to acquire them; the DataCite catalogue may already list metadata of datasets that fit the constraints of the hypothesis and are open for reappraisal. Still, working with the measured data to corroborate the hypothesis involves new methods, proven methods, and a variety of software tools. This produces a cohort of intermediate data that can be shared with colleagues to discuss research progress and receive a first evaluation. Consequently, the intermediate data should be versioned, so that one can easily return to the last valid state after noticing a wrong track. Project managers have different needs: they want to know what is currently being done, what has been done, and which data are the latest valid version in case somebody else has to continue the work. To make life easier for members of small science projects, we developed Datasync [1], a software for sharing and versioning data. Datasync is designed to synchronize directory trees between the computers of a research team over the internet. It is implemented as a Java application that watches a local directory tree and replicates changes as eSciDoc objects into an eSciDoc infrastructure [2] using the eSciDoc REST API. Modifications to the local filesystem automatically create a new version of an eSciDoc object inside the eSciDoc infrastructure. In this way, individual folders can be shared between team members, while project managers can get an overview of the current status by synchronizing whole project inventories. Additionally, XML metadata from separate files can be managed together with data files inside the eSciDoc objects. While Datasync's major task is to distribute directory trees, we complement its functionality with the PHP-based application panMetaDocs [3]. panMetaDocs is the successor to panMetaWorks [4] and inherits most of its functionality. Through a web browser, panMetaDocs provides an overview of the datasets inside the eSciDoc infrastructure. The software allows users to upload further data, to add and edit metadata using the metadata editor, and it disseminates metadata through various channels. In addition, previous versions of a file can be downloaded, and access rights can be defined on files and folders to control their visibility for users of both panMetaDocs and Datasync. panMetaDocs also serves as a publication agent for datasets and as a registration agent for dataset DOIs. The application stack presented here enables sharing, versioning, and central storage of data from the very beginning of project activities by using the file synchronization service Datasync, while the web application panMetaDocs complements DataSync with a dataset publication agent and other tools to handle administrative tasks on the data. [1] http://github.com/ulbricht/datasync [2] http://github.com/escidoc [3] http://panmetadocs.sf.net [4] http://metaworks.pangaea.de

  5. The BlueGene/L supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanota, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2003-05-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network and a Gigabit Ethernet for I/O. 65,536 of such nodes are connected into a 3-d torus with a geometry of 32×32×64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.

  6. Towards quantum networks of single spins: analysis of a quantum memory with an optical interface in diamond.

    PubMed

    Blok, M S; Kalb, N; Reiserer, A; Taminiau, T H; Hanson, R

    2015-01-01

    Single defect centers in diamond have emerged as a powerful platform for quantum optics experiments and quantum information processing tasks. Connecting spatially separated nodes via optical photons into a quantum network will enable distributed quantum computing and long-range quantum communication. Initial experiments on trapped atoms and ions as well as defects in diamond have demonstrated entanglement between two nodes over several meters. To realize multi-node networks, additional quantum bit systems that store quantum states while new entanglement links are established are highly desirable. Such memories allow for entanglement distillation, purification and quantum repeater protocols that extend the size, speed and distance of the network. However, to be effective, the memory must be robust against the entanglement generation protocol, which typically must be repeated many times. Here we evaluate the prospects of using carbon nuclear spins in diamond as quantum memories that are compatible with quantum networks based on single nitrogen vacancy (NV) defects in diamond. We present a theoretical framework to describe the dephasing of the nuclear spins under repeated generation of NV spin-photon entanglement and show that quantum states can be stored during hundreds of repetitions using typical experimental coupling parameters. This result demonstrates that nuclear spins with weak hyperfine couplings are promising quantum memories for quantum networks.

  7. Children's episodic memory.

    PubMed

    Ghetti, Simona; Lee, Joshua

    2011-07-01

    Episodic memory develops during childhood and adolescence. This trajectory depends on several underlying processes. In this article, we first discuss the development of the basic binding processes (e.g., the processes by which elements are bound together to form a memory episode) and control processes (e.g., reasoning and metamemory processes) involved in episodic remembering. Then, we discuss the role of these processes in false-memory formation. In the subsequent sections, we examine the neural substrates of the development of episodic memory. Finally, we discuss atypical development of episodic memory. As we proceed through the article, we suggest potential avenues for future research. WIREs Cogn Sci 2011, 2, 365-373. DOI: 10.1002/wcs.114 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  8. Technical support for digital systems technology development. Task order 1: ISP contention analysis and control

    NASA Technical Reports Server (NTRS)

    Stehle, Roy H.; Ogier, Richard G.

    1993-01-01

    Alternatives for realizing a packet-based network switch for use on a frequency division multiple access/time division multiplexed (FDMA/TDM) geostationary communication satellite were investigated. Each of the eight downlink beams supports eight directed dwells. The design needed to accommodate multicast packets with very low probability of loss due to contention. Three switch architectures were designed and analyzed. An output-queued, shared bus system yielded a functionally simple system, utilizing a first-in, first-out (FIFO) memory per downlink dwell, but at the expense of a large total memory requirement. A shared memory architecture offered the most efficiency in memory requirements, requiring about half the memory of the shared bus design. The processing requirement for the shared-memory system adds system complexity that may offset the benefits of the smaller memory. An alternative design using a shared memory buffer per downlink beam decreases circuit complexity through a distributed design, and requires at most 1000 packets of memory more than the completely shared memory design. Modifications to the basic packet switch designs were proposed to accommodate circuit-switched traffic, which must be served on a periodic basis with minimal delay. Methods for dynamically controlling the downlink dwell lengths were developed and analyzed. These methods adapt quickly to changing traffic demands, and do not add significant complexity or cost to the satellite and ground station designs. Methods for reducing the memory requirement by not requiring the satellite to store full packets were also proposed and analyzed. In addition, optimal packet and dwell lengths were computed as functions of memory size for the three switch architectures.
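
    The output-queued organization can be sketched concretely: one FIFO per downlink dwell (8 beams x 8 dwells), with a multicast packet enqueued once per destination dwell, which is what drives the large total memory requirement of that design. The C sketch below uses illustrative sizes, not the study's figures.

        #include <string.h>

        #define BEAMS  8
        #define DWELLS 8
        #define DEPTH  256      /* packets per FIFO (illustrative) */
        #define PKT    64       /* packet size in bytes (illustrative) */

        typedef struct {
            unsigned char slot[DEPTH][PKT];
            int head, tail, count;
        } fifo_t;

        static fifo_t outq[BEAMS][DWELLS];   /* one FIFO per downlink dwell */

        /* Enqueue a multicast packet into every destination dwell's FIFO.
           Returns the number of copies enqueued; a copy is dropped when its
           FIFO is full, the contention loss the designs try to minimize. */
        int switch_packet(const unsigned char *pkt, unsigned long long dest_mask) {
            int delivered = 0;
            for (int b = 0; b < BEAMS; b++)
                for (int d = 0; d < DWELLS; d++)
                    if (dest_mask & (1ULL << (b * DWELLS + d))) {
                        fifo_t *q = &outq[b][d];
                        if (q->count == DEPTH) continue;   /* overflow: drop */
                        memcpy(q->slot[q->tail], pkt, PKT);
                        q->tail = (q->tail + 1) % DEPTH;
                        q->count++;
                        delivered++;
                    }
            return delivered;
        }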

  9. The evolution of stories: from mimesis to language, from fact to fiction.

    PubMed

    Boyd, Brian

    2018-01-01

    Why a species as successful as Homo sapiens should spend so much time in fiction, in telling one another stories that neither side believes, at first seems an evolutionary riddle. Because of the advantages of tracking and recombining true information, capacities for event comprehension, memory, imagination, and communication evolved in a range of animal species; yet even chimpanzees cannot communicate beyond the here and now. By Homo erectus, our forebears had reached an increasing dependence on one another, not least in sharing information in mimetic, prelinguistic ways. As Daniel Dor shows, the pressure to pool ever more information, even beyond currently shared experience, led to the invention of language. Language in turn swiftly unlocked efficient forms of narrative, allowing early humans to learn much more about their kind than they could experience at first hand, so that they could cooperate and compete better through understanding one another more fully. This changed the payoff of sociality for individuals and groups. But true narrative was still limited to what had already happened. Once the strong existing predisposition to play combined with existing capacities for event comprehension, memory, imagination, language, and narrative, we could begin to invent fiction, and to explore the full range of human possibilities in concentrated, engaging, memorable forms. First language, then narrative, then fiction, created niches that altered selection pressures, and made us ever more deeply dependent on knowing more about our kind and our risks and opportunities than we could discover through direct experience. WIREs Cogn Sci 2018, 9:e1444. doi: 10.1002/wcs.1444 This article is categorized under: Cognitive Biology > Evolutionary Roots of Cognition Linguistics > Evolution of Language Neuroscience > Cognition. © 2017 The Author. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  10. Load sharing in distributed real-time systems with state-change broadcasts

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Chang, Yi-Chieh

    1989-01-01

    A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.
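
    The state-change broadcast rule is simple enough to sketch in C: a node announces itself only when it crosses the underloaded/fully-loaded threshold, so buddies track each other's state without per-task polling. The broadcast() stub and the threshold value below are illustrative stand-ins, not the paper's actual primitives.

        #include <stdio.h>

        #define BUDDY_SET 10     /* the size the analysis found sufficient */
        #define THRESHOLD 4      /* queue length marking "fully loaded" (assumed) */

        typedef enum { UNDERLOADED, FULLY_LOADED } state_t;

        /* Stand-in for the real communication primitive. */
        static void broadcast(int node, state_t s, const int buddies[BUDDY_SET]) {
            for (int i = 0; i < BUDDY_SET; i++)
                printf("node %d -> buddy %d: now %s\n", node, buddies[i],
                       s == UNDERLOADED ? "underloaded" : "fully loaded");
        }

        /* Call on every queue-length change; a broadcast is sent only when
           the node crosses the threshold, keeping message traffic low. */
        void on_queue_change(int node, int old_len, int new_len,
                             const int buddies[BUDDY_SET]) {
            state_t before = old_len >= THRESHOLD ? FULLY_LOADED : UNDERLOADED;
            state_t after  = new_len >= THRESHOLD ? FULLY_LOADED : UNDERLOADED;
            if (before != after)
                broadcast(node, after, buddies);
        }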

  11. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.

  12. Perpendicular STT_RAM cell in 8 nm technology node using Co1/Ni3(1 1 1)||Gr2||Co1/Ni3(1 1 1) structure as magnetic tunnel junction

    NASA Astrophysics Data System (ADS)

    Varghani, Ali; Peiravi, Ali; Moradi, Farshad

    2018-04-01

    Perpendicular-anisotropy Spin-Transfer Torque Random Access Memory (P-STT-RAM) is considered a promising candidate for high-density memories. Many distinct advantages of the Perpendicular Magnetic Tunnel Junction (P-MTJ) over the conventional in-plane MTJ (I-MTJ), such as lower switching current, a circular cell shape that facilitates manufacturability at smaller technology nodes, large thermal stability, smaller cell size, and lower dipole-field interaction between adjacent cells, make it a promising candidate for a universal memory. However, for small MTJ cell sizes, the perpendicular technology requires new materials with high polarization and a low damping factor, as well as a low resistance-area product of the P-MTJ, in order to avoid a high write voltage as the technology is scaled down. A new graphene-based STT-RAM cell for the 8 nm technology node that uses a high perpendicular magnetic anisotropy cobalt/nickel (Co/Ni) multilayer as its magnetic layers is proposed in this paper. The proposed junction offers a sufficient Tunneling Magnetoresistance Ratio (TMR), a low resistance-area product, low write voltage, and low power consumption, making it suitable for the 8 nm technology node.

  13. Spacecraft On-Board Information Extraction Computer (SOBIEC)

    NASA Technical Reports Server (NTRS)

    Eisenman, David; Decaro, Robert E.; Jurasek, David W.

    1994-01-01

    The Jet Propulsion Laboratory is the Technical Monitor on an SBIR Program issued for Irvine Sensors Corporation to develop a highly compact, dual-use massively parallel processing node known as SOBIEC. SOBIEC couples 3D memory stacking technology with processor technology provided by nCUBE. The node contains sufficient network input/output to implement up to an order-13 binary hypercube. The benefit of this network is that it scales linearly as more processors are added, and it is a superset of other commonly used interconnect topologies such as meshes, rings, toroids, and trees. In this manner, a distributed processing network can be easily devised and supported. The SOBIEC node has sufficient memory for most multi-computer applications, and also supports external memory expansion and DMA interfaces. The SOBIEC node is supported by a mature set of software development tools from nCUBE. The nCUBE operating system (OS) provides configuration and operational support for up to 8000 SOBIEC processors in an order-13 binary hypercube or any subset or partition(s) thereof. The OS is UNIX (USL SVR4) compatible, with C, C++, and FORTRAN compilers readily available. A stand-alone development system is also available to support SOBIEC test and integration.

  14. Position control of desiccation cracks by memory effect and Faraday waves.

    PubMed

    Nakayama, Hiroshi; Matsuo, Yousuke; Takeshi, Ooshida; Nakahara, Akio

    2013-01-01

    Pattern formation of desiccation cracks on a layer of a calcium carbonate paste is studied experimentally. This paste is known to exhibit a memory effect, meaning that a short application of horizontal vibration to the fresh paste predetermines the direction of the cracks that form after the paste has dried. While the position of the cracks (as opposed to their direction) is still stochastic in the case of horizontal vibration, the present work reports that their positioning is also controllable, at least to some extent, by applying vertical vibration to the paste and imprinting the pattern of Faraday waves, thus breaking the translational symmetry of the system. The experiments show that the cracks tend to appear in the node zones of the Faraday waves: in the case of stripe-patterned Faraday waves, the cracks form twice as frequently in the node zones as in the anti-node zones, presumably due to the localized horizontal motion. As a result of this preference of the cracks for the node zones, the memory of the square-lattice pattern of Faraday waves makes the cracks run in an oblique direction differing by 45 degrees from the intuitive lattice direction of the Faraday waves.

  15. Advanced Networking and Distributed Systems Defense Advanced Research Projects Agency

    DTIC Science & Technology

    1992-06-01

    balanced. The average number of surviving nodes in each subcube is 2I(1 - p) ... A node cannot transmit a message to a faulty ... Each will send a units of ... space at the processing elements ... total traffic will go to memory module 0 in a 1024-node ... After being generated, a packet is discarded if it cannot be

  16. Event-related rTMS at encoding affects differently deep and shallow memory traces.

    PubMed

    Innocenti, Iglis; Giovannelli, Fabio; Cincotta, Massimo; Feurra, Matteo; Polizzotto, Nicola R; Bianco, Giovanni; Cappa, Stefano F; Rossi, Simone

    2010-10-15

    The "level of processing" effect is a classical finding of the experimental psychology of memory. Actually, the depth of information processing at encoding predicts the accuracy of the subsequent episodic memory performance. When the incoming stimuli are analyzed in terms of their meaning (semantic, or deep, encoding), the memory performance is superior with respect to the case in which the same stimuli are analyzed in terms of their perceptual features (shallow encoding). As suggested by previous neuroimaging studies and by some preliminary findings with transcranial magnetic stimulation (TMS), the left prefrontal cortex may play a role in semantic processing requiring the allocation of working memory resources. However, it still remains unclear whether deep and shallow encoding share or not the same cortical networks, as well as how these networks contribute to the "level of processing" effect. To investigate the brain areas casually involved in this phenomenon, we applied event-related repetitive TMS (rTMS) during deep (semantic) and shallow (perceptual) encoding of words. Retrieval was subsequently tested without rTMS interference. RTMS applied to the left dorsolateral prefrontal cortex (DLPFC) abolished the beneficial effect of deep encoding on memory performance, both in terms of accuracy (decrease) and reaction times (increase). Neither accuracy nor reaction times were instead affected by rTMS to the right DLPFC or to an additional control site excluded by the memory process (vertex). The fact that online measures of semantic processing at encoding were unaffected suggests that the detrimental effect on memory performance for semantically encoded items took place in the subsequent consolidation phase. These results highlight the specific causal role of the left DLPFC among the wide left-lateralized cortical network engaged by long-term memory, suggesting that it probably represents a crucial node responsible for the improved memory performance induced by semantic processing. Copyright 2010 Elsevier Inc. All rights reserved.

  17. Direct memory access transfer completion notification

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Parker, Jeffrey J [Rochester, MN

    2011-02-15

    DMA transfer completion notification includes: inserting, by an origin DMA engine on an origin node in an injection first-in-first-out (`FIFO`) buffer, a data descriptor for an application message to be transferred to a target node on behalf of an application on the origin node; inserting, by the origin DMA engine, a completion notification descriptor in the injection FIFO buffer after the data descriptor for the message, the completion notification descriptor specifying a packet header for a completion notification packet; transferring, by the origin DMA engine to the target node, the message in dependence upon the data descriptor; sending, by the origin DMA engine, the completion notification packet to a local reception FIFO buffer using a local memory FIFO transfer operation; and notifying, by the origin DMA engine, the application that transfer of the message is complete in response to receiving the completion notification packet in the local reception FIFO buffer.
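
    The injection-FIFO idiom described here can be sketched in C: the data descriptor is queued first and the completion-notification descriptor behind it, so FIFO order guarantees the local notification cannot fire before the message has been injected. Field names and sizes below are illustrative, not the actual Blue Gene DMA register layout.

        #include <stdint.h>

        #define FIFO_SLOTS 64
        #define LOCAL_NODE 0xFFFFFFFFu  /* loopback: local reception FIFO */

        typedef enum { DESC_DATA, DESC_COMPLETION } desc_kind_t;

        typedef struct {
            desc_kind_t kind;
            uint64_t    payload_addr;   /* message buffer, or packet header */
            uint32_t    length;
            uint32_t    target_node;
        } descriptor_t;

        typedef struct {
            descriptor_t slot[FIFO_SLOTS];
            uint32_t head, tail;        /* the DMA engine consumes at head */
        } injection_fifo_t;

        static int fifo_push(injection_fifo_t *f, descriptor_t d) {
            uint32_t next = (f->tail + 1) % FIFO_SLOTS;
            if (next == f->head) return -1;         /* FIFO full */
            f->slot[f->tail] = d;
            f->tail = next;
            return 0;
        }

        /* Queue the message, then its completion notification, in order. */
        int send_with_notification(injection_fifo_t *f, uint64_t msg_addr,
                                   uint32_t len, uint32_t target) {
            descriptor_t data = { DESC_DATA, msg_addr, len, target };
            descriptor_t done = { DESC_COMPLETION, 0, 0, LOCAL_NODE };
            if (fifo_push(f, data) != 0) return -1;
            return fifo_push(f, done);
        }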

  18. SciReader enables reading of medical content with instantaneous definitions.

    PubMed

    Gradie, Patrick R; Litster, Megan; Thomas, Rinu; Vyas, Jay; Schiller, Martin R

    2011-01-25

    A major problem patients encounter when reading about health related issues is document interpretation, which limits reading comprehension and therefore negatively impacts health care. Currently, searching for medical definitions from an external source is time consuming, distracting, and negatively impacts reading comprehension and memory of the material. SciReader was built as a Java application with a Flex-based front-end client. The dictionary used by SciReader was built by consolidating data from several sources and generating new definitions with a standardized syntax. The application was evaluated by measuring the percentage of words defined in different documents. A survey was used to test the perceived effect of SciReader on reading time and comprehension. We present SciReader, a web-application that simplifies document interpretation by allowing users to instantaneously view medical, English, and scientific definitions as they read any document. This tool reveals the definitions of any selected word in a small frame at the top of the application. SciReader relies on a dictionary of ~750,000 unique Biomedical and English word definitions. Evaluation of the application shows that it maps ~98% of words in several different types of documents and that most users tested in a survey indicate that the application decreases reading time and increases comprehension. SciReader is a web application useful for reading medical and scientific documents. The program makes jargon-laden content more accessible to patients, educators, health care professionals, and the general public.

  19. Cognitive Control Network Contributions to Memory-Guided Visual Attention.

    PubMed

    Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C

    2016-05-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN (posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus) exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  20. Collectively loading an application in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
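
    The collective-load pattern has a natural MPI rendering: one job-leader rank reads the executable image and broadcasts it, so the parallel file system sees one reader instead of N. This is an illustrative sketch (with an assumed file name "app.bin"), not the patent's control-system implementation.

        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            long size = 0;
            char *image = NULL;
            if (rank == 0) {                        /* job leader reads once */
                FILE *f = fopen("app.bin", "rb");   /* path illustrative */
                if (f) {
                    fseek(f, 0, SEEK_END); size = ftell(f); rewind(f);
                    image = malloc(size);
                    if (fread(image, 1, size, f) != (size_t)size) size = 0;
                    fclose(f);
                }
            }
            /* Broadcast the image size, then the image itself, to all nodes. */
            MPI_Bcast(&size, 1, MPI_LONG, 0, MPI_COMM_WORLD);
            if (rank != 0) image = malloc(size);
            MPI_Bcast(image, size, MPI_BYTE, 0, MPI_COMM_WORLD);
            printf("rank %d holds %ld bytes of the application image\n",
                   rank, size);
            free(image);
            MPI_Finalize();
            return 0;
        }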

  1. CSF Aβ1-42, but not p-Tau181, differentiates aMCI from SCI.

    PubMed

    Rizzi, Liara; Maria Portal, Marcelle; Batista, Carlos Eduardo Alves; Missiaggia, Luciane; Roriz-Cruz, Matheus

    2018-01-01

    Individuals with amnestic mild cognitive impairment (aMCI) are at a high risk to develop Alzheimer's disease (AD). We compared CSF levels of biomarkers of amyloidosis (Aβ1-42) and neurodegeneration (p-Tau181) in individuals with aMCI and with subjective cognitive impairment (SCI) in order to ascertain diagnostic accuracy and predict the odds ratio associated with aMCI. We collected CSF of individuals clinically diagnosed with aMCI (n = 33) and SCI (n = 12) at a memory clinic in Southern Brazil. Levels of Aβ1-42 and p-Tau181 were measured by immunoenzymatic assay. Participants also underwent neuropsychological testing, including the verbal memory test subscore of the Consortium to Establish a Registry for Alzheimer's Disease (VM-CERAD). CSF concentration of Aβ1-42 was significantly lower (p = .007) and the p-Tau181/Aβ1-42 ratio higher (p = .014) in aMCI individuals than in SCI. However, isolated p-Tau181 levels were not associated with aMCI (p = .166). There was a statistically significant association between Aβ1-42 and p-Tau181 (R² = 0.177; β = -4.43; p = .017). The ROC AUC of CSF Aβ1-42 was 0.768 and that of the p-Tau181/Aβ1-42 ratio was 0.742. Individuals with Aβ1-42 < 823 pg/mL were 6.0 times more likely to be diagnosed with aMCI (p = .019), with 68.9% accuracy. Those with a p-Tau181/Aβ1-42 ratio > 0.071 were at 4.6-fold increased odds of having aMCI (p = .043), with 64.5% accuracy. VM-CERAD was significantly lower in aMCI than among SCI (p = .041). CSF Aβ1-42, but not p-Tau181, was significantly associated with aMCI. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Laterality effects in functional connectivity of the angular gyrus during rest and episodic retrieval.

    PubMed

    Bellana, Buddhika; Liu, Zhongxu; Anderson, John A E; Moscovitch, Morris; Grady, Cheryl L

    2016-01-08

    The angular gyrus (AG) is consistently reported in neuroimaging studies of episodic memory retrieval and is a fundamental node within the default mode network (DMN). Its specific contribution to episodic memory is debated, with some suggesting it is important for the subjective experience of episodic recollection rather than for retrieval of objective episodic details. Across studies of episodic retrieval, the left AG is recruited more reliably than the right. We explored functional connectivity of the right and left AG with the DMN during rest and retrieval to assess whether connectivity could provide insight into the nature of this laterality effect. Using data from the publicly available 1000 Functional Connectome Project, 8 min of resting fMRI data from 180 healthy young adults were analysed. Whole-brain functional connectivity at rest was measured using a seed-based Partial Least Squares (seed-PLS) approach (McIntosh and Lobaugh, 2004) with bilateral AG seeds. A subsequent analysis used 6 min of rest and 6 min of unconstrained, silent retrieval of autobiographical events from a new sample of 20 younger adults. Analysis of this dataset took a more targeted approach to functional connectivity, consisting of univariate pairwise correlations restricted to nodes of the DMN. The seed-PLS analysis resulted in two Latent Variables that together explained ~86% of the shared cross-block covariance. The first LV revealed a common network consistent with the DMN and engaging the AG bilaterally, whereas the second LV revealed a less robust, yet significant, laterality effect in connectivity: the left AG was more strongly connected to the DMN. Univariate analyses of the second sample again revealed stronger connectivity between the left AG and the DMN at rest. However, during retrieval the left AG was more strongly connected than the right to nodes of the DMN outside the medial temporal lobe (MTL), and MTL nodes were more strongly connected to the right AG. The multivariate analysis of resting connectivity revealed that the left and right AG show similar connectivity with the DMN. Only after accounting for this commonality were we able to detect a left laterality effect in DMN connectivity. Further probing with univariate connectivity analyses during retrieval demonstrates that the left preference we observe is restricted to the non-MTL regions of the DMN, whereas the right AG shows significantly stronger connectivity with the MTL. These data suggest bilateral involvement of the AG during retrieval, despite the focus on the left AG in the literature. Furthermore, the results suggest that the contribution of the left AG to retrieval may be separable from that of the MTL, consistent with a role for the left AG in the subjective aspects of recollection in memory, whereas the MTL and the right AG may contribute to objective recollection of specific memory details. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Fan Size and Foil Type in Recognition Memory.

    ERIC Educational Resources Information Center

    Walls, Richard T.; And Others

    An experiment involving 20 graduate and undergraduate students (7 males and 13 females) at West Virginia University (Morgantown) assessed "fan network structures" of recognition memory. A fan in network memory structure occurs when several facts are connected into a single node (concept). The more links from that concept to various…

  4. Direct memory access transfer completion notification

    DOEpatents

    Chen, Dong; Giampapa, Mark E.; Heidelberger, Philip; Kumar, Sameer; Parker, Jeffrey J.; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos

    2010-07-27

    Methods, compute nodes, and computer program products are provided for direct memory access (`DMA`) transfer completion notification. Embodiments include determining, by an origin DMA engine on an origin compute node, whether a data descriptor for an application message to be sent to a target compute node is currently in an injection first-in-first-out (`FIFO`) buffer in dependence upon a sequence number previously associated with the data descriptor, the total number of descriptors currently in the injection FIFO buffer, and the current sequence number for the newest data descriptor stored in the injection FIFO buffer; and notifying, by the origin DMA engine, a processor core on the origin compute node that the message has been sent if the data descriptor for the message is not currently in the injection FIFO buffer.

  5. A site oriented supercomputer for theoretical physics: The Fermilab Advanced Computer Program Multi Array Processor System (ACPMAPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nash, T.; Atac, R.; Cook, A.

    1989-03-06

    The ACPMAPS multiprocessor is a highly cost-effective, local-memory parallel computer with a hypercube or compound hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating-point-intensive, grid-like problems, particularly those with extreme computing requirements. The processing nodes of the system are single-board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data memory and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.

  6. Accelerate quasi Monte Carlo method for solving systems of linear algebraic equations through shared memory

    NASA Astrophysics Data System (ADS)

    Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola

    2017-04-01

    In this paper we study the Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Earlier research demonstrated that GPUs can effectively speed up these computations. Our purpose is to optimize the Monte Carlo simulation specifically for the GPU memory architecture. Random numbers are organized and stored in shared memory, which accelerates the parallel algorithm. Bank conflicts are avoided by our Collaborative Thread Arrays (CTA) scheme. The experiments show that the shared-memory-based strategy can speed up the computations by over 3X at best.

  7. CaLRS: A Critical-Aware Shared LLC Request Scheduling Algorithm on GPGPU

    PubMed Central

    Ma, Jianliang; Meng, Jinglei; Chen, Tianzhou; Wu, Minghui

    2015-01-01

    Ultra-high thread-level parallelism in modern GPUs usually introduces numerous memory requests simultaneously, so there are always plenty of memory requests waiting at each bank of the shared LLC (L2 in this paper) and global memory. For global memory, various schedulers have already been developed to adjust the request sequence, but little work has focused on the service sequence at the shared LLC. We measured that many GPU applications always have requests queuing at LLC banks for service, which provides an opportunity to optimize the service order at the LLC. By adjusting the order in which GPU memory requests are serviced, we can improve the schedulability of SMs. We therefore propose a critical-aware shared LLC request scheduling algorithm (CaLRS) in this paper. The priority assigned to each memory request is central to CaLRS: we use the number of memory requests that originate from the same warp but have not yet been serviced when they arrive at the shared LLC bank to represent the criticality of each warp. Experiments show that the proposed scheme can boost SM schedulability effectively by promoting the scheduling priority of memory requests with high criticality, and thereby indirectly improves the performance of the GPU. PMID:25729772

  8. Development of an International Canine Spinal Cord Injury observational registry: a collaborative data-sharing network to optimize translational studies of SCI.

    PubMed

    Moore, Sarah A; Zidan, Natalia; Spitzbarth, Ingo; Nout-Lomas, Yvette S; Granger, Nicolas; da Costa, Ronaldo C; Levine, Jonathan M; Jeffery, Nick D; Stein, Veronika M; Tipold, Andrea; Olby, Natasha J

    2018-05-23

    Prospective cross-sectional cohort study. The canine spontaneous model of spinal cord injury (SCI) is an important pre-clinical platform, as it recapitulates key facets of human injury in a naturally occurring context. The establishment of an observational canine SCI registry constitutes a key step in performing epidemiologic studies and assessing the impact of therapeutic strategies to enhance translational research. Further, accumulating information on dogs with SCI may contribute to current "big data" approaches that enhance understanding of the disease using heterogeneous multi-institutional, multi-species datasets from both pre-clinical and human studies. Multiple veterinary academic institutions across the United States and Europe participated. Common data elements recommended for experimental and human SCI studies were reviewed and adapted for use in a web-based registry, into which all dogs presenting to member veterinary tertiary care facilities were prospectively entered over ~1 year. Analysis of data accumulated during the first year of the registry suggests that 16% of dogs with SCI present with severe, sensorimotor-complete injury and that 15% of cases are seen by a tertiary care facility within 8 h of injury. Similar to the human SCI population, 34% were either overweight or obese. Severity of injury and timing of presentation suggest that neuroprotective studies using the canine clinical model could be conducted efficiently using a multi-institutional approach. Additionally, pet dogs with SCI experience comorbidities similar to people with SCI, in particular obesity, and could serve as an important model to evaluate the effects of this condition.

  9. The role of sleep in cognitive processing: focusing on memory consolidation.

    PubMed

    Chambers, Alexis M

    2017-05-01

    Research indicates that sleep promotes various cognitive functions, such as decision-making, language, categorization, and memory. Of these, most work has focused on the influence of sleep on memory, with ample work showing that sleep enhances memory consolidation, a process that stores new memories in the brain over time. Recent psychological and neurophysiological research has vastly increased understanding of this process. Such work not only suggests that consolidation relies on plasticity-related mechanisms that reactivate and stabilize memory representations, but also that this process may be experimentally manipulated by methods that target which memory traces are reactivated during sleep. Furthermore, aside from memory storage capabilities, memory consolidation also appears to reorganize and integrate memories with preexisting knowledge, which may facilitate the discovery of underlying rules and associations that benefit other cognitive functioning, including problem solving and creativity. WIREs Cogn Sci 2017, 8:e1433. doi: 10.1002/wcs.1433 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  10. Understanding the implementation of evidence-based care: a structural network approach.

    PubMed

    Parchman, Michael L; Scoglio, Caterina M; Schumm, Phillip

    2011-02-24

    Recent study of complex networks has yielded many new insights into phenomena such as social networks, the internet, and sexually transmitted infections. The purpose of this analysis is to examine the properties of a network created by the 'co-care' of patients within one region of the Veterans Health Affairs. Data were obtained for all outpatient visits from 1 October 2006 to 30 September 2008 within one large Veterans Integrated Service Network. Types of physician within each clinic were nodes connected by shared patients, with a weighted link representing the number of shared patients between each connected pair. Network metrics calculated included edge weights, node degree, node strength, node coreness, and node betweenness. Log-log plots were used to examine the distribution of these metrics. Sizes of k-core networks were also computed under multiple conditions of node removal. There were 4,310,465 encounters by 266,710 shared patients between 722 provider types (nodes) across 41 stations or clinics, resulting in 34,390 edges. The number of other nodes to which primary care provider nodes have a connection (172.7) is 42% greater than that of general surgeons and two and one-half times as high as cardiology. The log-log plot of the edge weight distribution appears to be linear in nature, revealing a 'scale-free' characteristic of the network, while the distributions of node degree and node strength are less so. The analysis of the k-core network sizes under increasing removal of primary care nodes shows that the 10 most connected primary care nodes play a critical role in keeping the k-core networks connected, because their removal disintegrates the highest k-core network. Delivery of healthcare in a large healthcare system such as that of the US Department of Veterans Affairs (VA) can be represented as a complex network. This network consists of highly connected provider nodes that serve as 'hubs' within the network, and it demonstrates some 'scale-free' properties. By using currently available tools to explore its topology, we can explore how the underlying connectivity of such a system affects the behavior of providers, and perhaps leverage that understanding to improve quality and outcomes of care.

  11. Time Constraints and Resource Sharing in Adults' Working Memory Spans

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Bernardin, Sophie; Camos, Valerie

    2004-01-01

    This article presents a new model that accounts for working memory spans in adults, the time-based resource-sharing model. The model assumes that both components (i.e., processing and maintenance) of the main working memory tasks require attention and that memory traces decay as soon as attention is switched away. Because memory retrievals are…

  12. Evaluating the Impact of Data Placement to Spark and SciDB with an Earth Science Use Case

    NASA Technical Reports Server (NTRS)

    Doan, Khoa; Oloso, Amidu; Kuo, Kwo-Sen; Clune, Thomas; Yu, Hongfeng; Nelson, Brian; Zhang, Jian

    2016-01-01

    We investigate the impact of data placement for two Big Data technologies, Spark and SciDB, with a use case from Earth science in which the data arrays are multidimensional. This investigation simultaneously provides an opportunity to evaluate the performance of the technologies involved. Two datastores, HDFS and Cassandra, are used with Spark for our comparison. We find that Spark performs better with Cassandra than with HDFS, but that SciDB performs better still than Spark with either datastore. The investigation also underscores the value of having data aligned in advance for the most common analysis scenarios on a shared-nothing architecture; otherwise, repartitioning has to be carried out on the fly, degrading overall performance.

  13. Externalising the autobiographical self: sharing personal memories online facilitated memory retention.

    PubMed

    Wang, Qi; Lee, Dasom; Hou, Yubo

    2017-07-01

    Internet technology provides a new means of recalling and sharing personal memories in the digital age. What is the mnemonic consequence of posting personal memories online? Theories of transactive memory and autobiographical memory would make contrasting predictions. In the present study, college students completed a daily diary for a week, listing at the end of each day all the events that happened to them on that day. They also reported whether they posted any of the events online. Participants received a surprise memory test after the completion of the diary recording and then another test a week later. At both tests, events posted online were significantly more likely than those not posted online to be recalled. It appears that sharing memories online may provide unique opportunities for rehearsal and meaning-making that facilitate memory retention.

  14. Remote direct memory access

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.

    2012-12-11

    Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request-to-send message, the request-to-send message specifying data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request-to-send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.

  15. Asynchronous reference frame agreement in a quantum network

    NASA Astrophysics Data System (ADS)

    Islam, Tanvirul; Wehner, Stephanie

    2016-03-01

    An efficient implementation of many multiparty protocols for quantum networks requires that all the nodes in the network share a common reference frame. Establishing such a reference frame from scratch is especially challenging in an asynchronous network where network links might have arbitrary delays and the nodes do not share synchronised clocks. In this work, we study the problem of establishing a common reference frame in an asynchronous network of n nodes of which at most t are affected by arbitrary unknown error, and the identities of the faulty nodes are not known. We present a protocol that allows all the correctly functioning nodes to agree on a common reference frame as long as the network graph is complete and not more than t < n/4 nodes are faulty. As the protocol is asynchronous, it can be used with some assumptions to synchronise clocks over a network. Also, the protocol has the appealing property that it allows any existing two-node asynchronous protocol for reference frame agreement to be lifted to a robust protocol for an asynchronous quantum network.

  16. Method and apparatus for offloading compute resources to a flash co-processing appliance

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing -bung

    2015-10-13

    Solid-State Drive (SSD) burst buffer nodes are interposed into a parallel supercomputing cluster to enable fast burst checkpointing of cluster memory to or from nearby interconnected solid-state storage, with asynchronous migration between the burst buffer nodes and slower, more distant disk storage. The SSD nodes also perform tasks offloaded from the compute nodes or associated with the checkpoint data. For example, the data for the next job is preloaded into the SSD node and uploaded very quickly to the respective compute node just before that job starts. During a job, the SSD nodes perform fast visualization and statistical analysis upon the checkpoint data. The SSD nodes can also perform data reduction and encryption of the checkpoint data.

  17. Pacing a data transfer operation between compute nodes on a parallel computer

    DOEpatents

    Blocksome, Michael A [Rochester, MN

    2011-09-13

    Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
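
    The pacing handshake can be imitated with plain two-sided MPI; the sketch below is our stand-in for the patented remote-get DMA mechanism (the chunk size and tags are assumed values). The origin sends one chunk, then blocks until the target's pacing response arrives before injecting the next, so the target is never flooded.

        #include <mpi.h>
        #include <stddef.h>

        enum { CHUNK = 4096, TAG_DATA = 0, TAG_PACE = 1 };  /* assumed */

        /* Origin: transfer the message chunk by chunk, waiting for a
           pacing response before each subsequent chunk. */
        void paced_send(const char *msg, size_t len, int target, MPI_Comm comm) {
            for (size_t off = 0; off < len; ) {
                int n = (len - off > CHUNK) ? CHUNK : (int)(len - off);
                MPI_Send(msg + off, n, MPI_CHAR, target, TAG_DATA, comm);
                int pace;
                MPI_Recv(&pace, 1, MPI_INT, target, TAG_PACE, comm,
                         MPI_STATUS_IGNORE);
                off += (size_t)n;
            }
        }

        /* Target: receive each chunk, then release the next one. */
        void paced_recv(char *buf, size_t len, int origin, MPI_Comm comm) {
            for (size_t off = 0; off < len; ) {
                MPI_Status st;
                int n, pace = 1;
                MPI_Recv(buf + off, CHUNK, MPI_CHAR, origin, TAG_DATA, comm, &st);
                MPI_Get_count(&st, MPI_CHAR, &n);
                MPI_Send(&pace, 1, MPI_INT, origin, TAG_PACE, comm);
                off += (size_t)n;
            }
        }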

  18. ERDC MSRC Resource. High Performance Computing for the Warfighter. Spring 2006

    DTIC Science & Technology

    2006-01-01

    …named Ruby, and the HP/Compaq SC45, named Emerald, continue to add their unique sparkle to the ERDC MSRC computer infrastructure. ERDC invited the… …configuration on B-52H… …purchased additional memory for the login nodes so that this part of the solution process could be done as a preprocessing step. On… …application and system services. Of the service nodes, 10 are login nodes and 23 are input/output (I/O) server nodes for the Lustre file system (i.e., the…

  19. XLMS: A Linguistic Memory System,

    DTIC Science & Technology

    1980-09-01

    …summum genus as the root. The node tree is used by XLMS for accessing concepts via their ilks (or nodes via their genuses) and as the basis for the… …type of object, the node. A node has two essential components, known as the genus and specializer, which are its immediate constituents. The genus of a…

  20. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    NASA Astrophysics Data System (ADS)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  1. Neural Correlates of Confidence during Item Recognition and Source Memory Retrieval: Evidence for Both Dual-Process and Strength Memory Theories

    ERIC Educational Resources Information Center

    Hayes, Scott M.; Buchler, Norbou; Stokes, Jared; Kragel, James; Cabeza, Roberto

    2011-01-01

    Although the medial-temporal lobes (MTL), PFC, and parietal cortex are considered primary nodes in the episodic memory network, there is much debate regarding the contributions of MTL, PFC, and parietal subregions to recollection versus familiarity (dual-process theory) and the feasibility of accounts on the basis of a single memory strength…

  2. A Trustworthy Key Generation Prototype Based on DDR3 PUF for Wireless Sensor Networks

    PubMed Central

    Liu, Wenchao; Zhang, Zhenhua; Li, Miaoxin; Liu, Zhenglin

    2014-01-01

    Secret key leakage in wireless sensor networks (WSNs) is a high security risk, especially when sensor nodes are deployed in hostile environments and are physically accessible to attackers. With today's semi- and fully-invasive attack techniques, attackers can directly extract cryptographic keys from non-volatile memory (NVM) storage. The Physically Unclonable Function (PUF) is a promising technology for resisting node capture attacks, and it also provides a low-cost, tamper-resistant key provisioning solution. In this paper, we designed a PUF based on double-data-rate SDRAM Type 3 (DDR3) memory by exploring its memory decay characteristics. We also described a prototype of 128-bit key generation based on the DDR3 PUF with an integrated fuzzy extractor. Given the wide adoption of DDR3 memory in WSNs, our proposed DDR3 PUF technology, with its high security level and no required hardware changes, is suitable for a wide range of WSN applications. PMID:24984058

  3. Quantum storage of entangled telecom-wavelength photons in an erbium-doped optical fibre

    NASA Astrophysics Data System (ADS)

    Saglamyurek, Erhan; Jin, Jeongwan; Verma, Varun B.; Shaw, Matthew D.; Marsili, Francesco; Nam, Sae Woo; Oblak, Daniel; Tittel, Wolfgang

    2015-02-01

    The realization of a future quantum Internet requires the processing and storage of quantum information at local nodes and interconnecting distant nodes using free-space and fibre-optic links. Quantum memories for light are key elements of such quantum networks. However, to date, neither an atomic quantum memory for non-classical states of light operating at a wavelength compatible with standard telecom fibre infrastructure, nor a fibre-based implementation of a quantum memory, has been reported. Here, we demonstrate the storage and faithful recall of the state of a 1,532 nm wavelength photon entangled with a 795 nm photon, in an ensemble of cryogenically cooled erbium ions doped into a 20-m-long silica fibre, using a photon-echo quantum memory protocol. Despite its currently limited efficiency and storage time, our broadband light-matter interface brings fibre-based quantum networks one step closer to reality.

  4. Spinal Cord Injury Causes Brain Inflammation Associated with Cognitive and Affective Changes: Role of Cell Cycle Pathways

    PubMed Central

    Zhao, Zaorui; Sabirzhanov, Boris; Stoica, Bogdan A.; Kumar, Alok; Luo, Tao; Skovira, Jacob; Faden, Alan I.

    2014-01-01

    Experimental spinal cord injury (SCI) causes chronic neuropathic pain associated with inflammatory changes in thalamic pain regulatory sites. Our recent studies examining chronic pain mechanisms after rodent SCI showed chronic inflammatory changes not only in thalamus, but also in other regions including hippocampus and cerebral cortex. Because changes appeared similar to those in our rodent TBI models that are associated with neurodegeneration and neurobehavioral dysfunction, we examined effects of mouse SCI on cognition, depressive-like behavior, and brain inflammation. SCI caused spatial and retention memory impairment and depressive-like behavior, as evidenced by poor performance in the Morris water maze, Y-maze, novel object recognition, step-down passive avoidance, tail suspension, and sucrose preference tests. SCI caused chronic microglial activation in the hippocampus and cerebral cortex, where microglia with hypertrophic morphologies and M1 phenotype predominated. Stereological analyses showed significant neuronal loss in the hippocampus at 12 weeks but not 8 d after injury. Increased cell-cycle-related gene (cyclins A1, A2, D1, E2F1, and PCNA) and protein (cyclin D1 and CDK4) expression was found chronically in hippocampus and cerebral cortex. Systemic administration of the selective cyclin-dependent kinase inhibitor CR8 after SCI significantly reduced cell cycle gene and protein expression, microglial activation and neurodegeneration in the brain, cognitive decline, and depression. These studies indicate that SCI can initiate a chronic brain neurodegenerative response, likely related to delayed, sustained induction of M1-type microglia and related cell cycle activation, which result in cognitive deficits and physiological depression. PMID:25122899

  5. AB052. The Human Variome Project (HVP) and the HVP ASEAN Node

    PubMed Central

    Alwi, Zilfalil Bin

    2015-01-01

    The Human Variome Project (HVP) is an international NGO that is working to build capacity in responsible clinical genomics around the world. Founded in 2006, and led in its early years by Professor Richard Cotton, the Project has grown to become a global movement with over 1,300 individual members from 81 countries and close to 200 data provider members. The project works to ensure that the lack of access to information on genetic variants and their effects on human health is not an impediment to diagnosis and treatment. Together with partner organisations including national and regional human genetics societies, national governments and intergovernmental organisations such as UNESCO—of which the HVP is an NGO official partner—the project establishes standards, guidelines and best practices for the responsible development and operation of genetic variation data sharing infrastructure, facilitates training and education of the public and the clinical genomics field and assists in embedding data sharing into routine clinical practice. The HVP believes that the free and open sharing of genetic variation data is fundamental to providing quality clinical care. To ensure that this data can be collected, curated, interpreted and shared in a responsible manner that is respectful of the diverse ethical, legal and social differences of its member countries, the HVP works with local stakeholders in each country to establish HVP Country Nodes. An HVP Country Node is defined as having three components: (I) a repository, or linked network of databases, that collects and stores information on variation in the human genome that has been generated within each country and that enables the sharing of that information both nationally and internationally; (II) a governance structure that ensures that the work of the Node is both sustainable in the long term and is consistent with all relevant national and international ethical, legal and social requirements; and (III) a set of policies and procedures that ensures that the repository is operated and maintained in a responsible and accountable manner that is consistent with both national and HVP Standards. The HVP Malaysian Node (MyHVP), one of 23 HVP Country Nodes currently in existence, was established in 2010 and officially launched by Professor Cotton. The MyHVP database was made available on the internet a year later. The HVP Malaysian Node has taken a key role in the region and has worked to establish the HVP ASEAN Regional Node. Among its objectives is to foster closer collaboration among ASEAN member states on issues relating to data sharing, databasing and variant interpretation expertise, resources and technical facilities. The HVP ASEAN Regional Node also provides help with capacity building and training, especially to less well-resourced countries in the South East Asian region, for example Cambodia, Laos, and Myanmar. The HVP ASEAN Regional Node was launched in 2013 at Universiti Sains Malaysia, Kota Bharu, Malaysia, with representatives from Thailand, Vietnam, Singapore, Brunei, Indonesia, the Philippines and the international Human Variome Project in attendance. This presentation will provide an in-depth overview of the HVP ASEAN Regional Node and its progress to date.

  6. Direct match data flow memory for data driven computing

    DOEpatents

    Davidson, George S.; Grafe, Victor Gerald

    1997-01-01

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.
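
    The four commonly addressed memories and the tag memory's fire condition can be pictured as one record per instruction slot. The C sketch below is our illustration of the behaviour described above (field names and widths are assumptions, not the patent's design):

        #include <stdbool.h>
        #include <stdint.h>

        enum {                       /* tag memory status bits */
            A_PRESENT = 0x1,         /* parameter A stored        */
            B_PRESENT = 0x2,         /* parameter B stored        */
            A_STICKY  = 0x4,         /* A is reused, not consumed */
            B_STICKY  = 0x8
        };

        typedef struct {
            double   a, b;           /* parameter memory: A and B operands */
            uint8_t  opcode;         /* opcode memory: the instruction     */
            uint32_t target;         /* target memory: output address      */
            uint8_t  tag;            /* tag memory: per-parameter status   */
        } dataflow_slot;

        /* R VALID: the slot "fires" once both parameters are present. */
        static bool ready_to_fire(const dataflow_slot *s) {
            return (s->tag & A_PRESENT) && (s->tag & B_PRESENT);
        }

        /* Storing a parameter sets its status bit; the caller forwards the
           slot to the processor's input FIFO when this returns true. */
        static bool store_param(dataflow_slot *s, bool is_a, double v) {
            if (is_a) { s->a = v; s->tag |= A_PRESENT; }
            else      { s->b = v; s->tag |= B_PRESENT; }
            return ready_to_fire(s);
        }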

  7. Direct match data flow memory for data driven computing

    DOEpatents

    Davidson, G.S.; Grafe, V.G.

    1997-10-07

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.

  8. Software/hardware distributed processing network supporting the Ada environment

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

    A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 Risc for processing, VHSIC ASICs for high speed, reliable, inter-node communications and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada- implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit for space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  9. A high performance parallel algorithm for 1-D FFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, R.C.; Gustavson, F.G.; Zubair, M.

    1994-12-31

    In this paper the authors propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. They use this to solve a commonly encountered FFT based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. They show that the multi-dimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. They implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine.
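
    The "multi-dimensional formulation" is, at heart, the classic splitting of a length n = n1·n2 transform into column DFTs, a twiddle-factor scaling, and row DFTs; on a distributed-memory machine the data exchange between the first and last steps becomes the main communication phase. The sketch below shows the decomposition itself (our index conventions, with naive O(n²) inner DFTs for brevity; a real code would use optimized single-node FFTs):

        #include <complex.h>
        #include <math.h>

        /* Naive strided DFT: out[k*os] = sum_j in[j*is] * e^(-2*pi*I*j*k/len). */
        static void dft(const double complex *in, int is,
                        double complex *out, int os, int len) {
            for (int k = 0; k < len; k++) {
                double complex s = 0;
                for (int j = 0; j < len; j++)
                    s += in[j * is] * cexp(-2.0 * I * M_PI * j * k / len);
                out[k * os] = s;
            }
        }

        /* Length-n transform, n = n1*n2: input x[n2*i1 + i2] produces
           output X[k1 + n1*k2]; y is scratch of length n. */
        void fft_four_step(const double complex *x, double complex *y,
                           double complex *X, int n1, int n2) {
            int n = n1 * n2;
            /* Step 1: n2 transforms of size n1 (stride n2 walks a column). */
            for (int i2 = 0; i2 < n2; i2++)
                dft(x + i2, n2, y + (size_t)n1 * i2, 1, n1);
            /* Step 2: twiddle factors couple the two dimensions. */
            for (int i2 = 0; i2 < n2; i2++)
                for (int k1 = 0; k1 < n1; k1++)
                    y[k1 + (size_t)n1 * i2] *=
                        cexp(-2.0 * I * M_PI * k1 * i2 / n);
            /* Step 3: n1 transforms of size n2, results in natural order. */
            for (int k1 = 0; k1 < n1; k1++)
                dft(y + k1, n1, X + k1, n1, n2);
        }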

  10. Simplifying and speeding the management of intra-node cache coherence

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-04-17

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  11. Lambda network having 2^(m-1) nodes in each of m stages with each node coupled to four other nodes for bidirectional routing of data packets between nodes

    DOEpatents

    Napolitano, Jr., Leonard M.

    1995-01-01

    The Lambda network is a single stage, packet-switched interprocessor communication network for a distributed memory, parallel processor computer. Its design arises from the desired network characteristics of minimizing mean and maximum packet transfer time, local routing, expandability, deadlock avoidance, and fault tolerance. The network is based on fixed-degree nodes and has small mean and maximum packet transfer distances as a function of the number of processors n. The routing method is detailed, as are methods for expandability, deadlock avoidance, and fault tolerance.

  12. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance, where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  13. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance, where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  14. Method for simultaneous overlapped communications between neighboring processors in a multiple processor system

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1991-01-01

    A parallel computing system and method having improved performance, where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  15. We Remember 2015 - A Video Memorial

    NASA Image and Video Library

    2015-06-10

    Video tribute to 12 members of the NASA Astrobiology community who passed away since the 2012 AbSciCon meeting. Tributes to: Dick Holland, Bob Wharton, Carl Woese, David McKay, Tom Wdowiak, John Billingham, Bishun Khare, Tom Pierson, Colin Pillinger, Katrina Edwards, Martin Brasier and Alberto Behar.

  16. Preconditioned implicit solvers for the Navier-Stokes equations on distributed-memory machines

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liou, Meng-Sing; Dyson, Rodger W.

    1994-01-01

    The GMRES method is parallelized, and combined with local preconditioning to construct an implicit parallel solver to obtain steady-state solutions for the Navier-Stokes equations of fluid flow on distributed-memory machines. The new implicit parallel solver is designed to preserve the convergence rate of the equivalent 'serial' solver. A static domain-decomposition is used to partition the computational domain amongst the available processing nodes of the parallel machine. The SPMD (Single-Program Multiple-Data) programming model is combined with message-passing tools to develop the parallel code on a 32-node Intel Hypercube and a 512-node Intel Delta machine. The implicit parallel solver is validated for internal and external flow problems, and is found to compare identically with flow solutions obtained on a Cray Y-MP/8. A peak computational speed of 2300 MFlops/sec has been achieved on 512 nodes of the Intel Delta machine, for a problem size of 1024 K equations (256 K grid points).
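
    The communication implied by such a static domain decomposition is a ghost-cell (halo) exchange between neighbouring subdomains on every sweep. A minimal 1-D sketch with MPI (our illustration, not the paper's code):

        #include <mpi.h>

        /* u holds nloc interior cells plus one ghost cell at each end,
           u[0] and u[nloc+1]. Neighbours swap boundary values per sweep. */
        void halo_exchange(double *u, int nloc, MPI_Comm comm) {
            int rank, size;
            MPI_Comm_rank(comm, &rank);
            MPI_Comm_size(comm, &size);
            int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
            int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

            /* Rightmost interior cell goes right; left ghost filled from left. */
            MPI_Sendrecv(&u[nloc], 1, MPI_DOUBLE, right, 0,
                         &u[0],    1, MPI_DOUBLE, left,  0,
                         comm, MPI_STATUS_IGNORE);
            /* Leftmost interior cell goes left; right ghost filled from right. */
            MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  1,
                         &u[nloc + 1], 1, MPI_DOUBLE, right, 1,
                         comm, MPI_STATUS_IGNORE);
        }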

  17. Distributed clone detection in static wireless sensor networks: random walk with network division.

    PubMed

    Khan, Wazir Zada; Aalsalem, Mohammed Y; Saad, N M

    2015-01-01

    Wireless Sensor Networks (WSNs) are vulnerable to clone attacks, or node replication attacks, because they are deployed in hostile and unattended environments where they are deprived of physical protection and the sensor nodes lack physical tamper-resistance. As a result, an adversary can easily capture and compromise sensor nodes and, after replicating them, insert an arbitrary number of clones/replicas into the network. If these clones are not efficiently detected, the adversary is further able to mount a wide variety of internal attacks which can cripple the various protocols and sensor applications. Several solutions have been proposed in the literature to address the crucial problem of clone detection, but they are not satisfactory as they suffer from serious drawbacks. In this paper we propose a novel distributed solution called Random Walk with Network Division (RWND) for the detection of node replication attacks in static WSNs, which is based on the claimer-reporter-witness framework and combines a simple random walk with network division. RWND detects clone(s) by following the claimer-reporter-witness framework, and a random walk is employed within each area for the selection of witness nodes. Splitting the network into levels and areas makes clone detection more efficient, and the high security of witness nodes is ensured with moderate communication and memory overheads. Our simulation results show that RWND outperforms the existing witness-node-based strategies with moderate communication and memory overheads.
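
    The random-walk ingredient of witness selection can be sketched compactly; the code below is only that ingredient (the adjacency-matrix graph, uniform neighbour choice, and function name are our assumptions), not the full RWND protocol with its levels, areas, and claimer-reporter-witness exchange:

        #include <stdlib.h>

        /* Walk `steps` random hops from `start` over an n-node graph given
           as an n*n adjacency matrix; the node where the walk ends would
           serve as a witness holding the claimer's location claim. */
        int random_walk_witness(const unsigned char *adj, int n,
                                int start, int steps) {
            int cur = start;
            for (int s = 0; s < steps; s++) {
                int deg = 0;
                for (int j = 0; j < n; j++) deg += adj[cur * n + j];
                if (deg == 0) break;            /* isolated node: stop */
                int pick = rand() % deg;        /* uniform neighbour   */
                for (int j = 0; j < n; j++)
                    if (adj[cur * n + j] && pick-- == 0) { cur = j; break; }
            }
            return cur;
        }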

  18. Phylogenetic Analysis of Local-Scale Tree Soil Associations in a Lowland Moist Tropical Forest

    PubMed Central

    Schreeg, Laura A.; Kress, W. John; Erickson, David L.; Swenson, Nathan G.

    2010-01-01

    Background: Local plant-soil associations are commonly studied at the species-level, while associations at the level of nodes within a phylogeny have been less well explored. Understanding associations within a phylogenetic context, however, can improve our ability to make predictions across systems and can advance our understanding of the role of evolutionary history in structuring communities.
    Methodology/Principal Findings: Here we quantified evolutionary signal in plant-soil associations using a DNA sequence-based community phylogeny and several soil variables (e.g., extractable phosphorus, aluminum and manganese, pH, and slope as a proxy for soil water). We used published plant distributional data from the 50-ha plot on Barro Colorado Island (BCI), Republic of Panamá. Our results suggest some groups of closely related species do share similar soil associations. Most notably, the node shared by Myrtaceae and Vochysiaceae was associated with high levels of aluminum, a potentially toxic element. The node shared by Apocynaceae was associated with high extractable phosphorus, a nutrient that could be limiting on a taxon-specific level. The node shared by the large group of Laurales and Magnoliales was associated with both low extractable phosphorus and with steeper slope. Despite significant node-specific associations, this study detected little to no phylogeny-wide signal. We consider the majority of the ‘traits’ (i.e., soil variables) evaluated to fall within the category of ecological traits. We suggest that, given this category of traits, phylogeny-wide signal might not be expected while node-specific signals can still indicate phylogenetic structure with respect to the variable of interest.
    Conclusions: Within the BCI forest dynamics plot, distributions of some plant taxa are associated with local-scale differences in soil variables when evaluated at individual nodes within the phylogenetic tree, but they are not detectable by phylogeny-wide signal. Trends highlighted in this analysis suggest how plant-soil associations may drive plant distributions and diversity at the local-scale. PMID:21060686

  19. Circuit-Switched Memory Access in Photonic Interconnection Networks for High-Performance Embedded Computing

    DTIC Science & Technology

    2010-07-22

    …dependent, providing a natural bandwidth match between compute cores and the memory subsystem. • High Bandwidth Density. Waveguides crossing the chip… …simulate this memory access architecture on a 256-core chip with a concentrated 64-node network using detailed traces of high-performance embedded… …memory modules, we place memory access points (MAPs) around the periphery of the chip connected to the network. These MAPs, shown in Figure 4, contain…

  20. Spreading activation in emotional memory networks and the cumulative effects of somatic markers.

    PubMed

    Foster, Paul S; Hubbard, Tyler; Campbell, Ransom W; Poole, Jonathan; Pridmore, Michael; Bell, Chris; Harrison, David W

    2017-06-01

    The theory of spreading activation proposes that the activation of a semantic memory node may spread along bidirectional associative links to other related nodes. Although this theory was originally proposed to explain semantic memory networks, a similar process may be said to exist with episodic or emotional memory networks. The Somatic Marker hypothesis proposes that remembering an emotional memory activates the somatic sensations associated with the memory. An integration of these two models suggests that as spreading activation in emotional memory networks increases, a greater number of associated somatic markers would become activated. This process would then result in greater changes in physiological functioning. We sought to investigate this possibility by having subjects recall words associated with sad and happy memories, in addition to a neutral condition. The average ages of the memories and the number of word memories recalled were then correlated with measures of heart rate and skin conductance. The results indicated significant positive correlations between the number of happy word memories and heart rate (r = .384, p = .022) and between the average ages of the sad memories and skin conductance (r = .556, p = .001). Unexpectedly, a significant negative relationship was found between the number of happy word memories and skin conductance (r = -.373, p = .025). The results provide partial support for our hypothesis, indicating that increasing spreading activation in emotional memory networks activates an increasing number of somatic markers and this is then reflected in greater physiological activity at the time of recalling the memories.

  1. A Community-based Participatory Research Approach to the Development of a Peer Navigator Health Promotion Intervention for People with Spinal Cord Injury

    PubMed Central

    Newman, Susan D.; Gillenwater, Gwen; Toatley, Sherwood; Rodgers, Marka D.; Todd, Nathan; Epperly, Diane; Andrews, Jeannette O.

    2014-01-01

    Background: Recent trends indicate research targeting outcomes of importance to people with disabilities, such as spinal cord injury (SCI), may be best informed by those individuals; however, there are very few published rehabilitation intervention studies that include people with disabilities in the research process in a role beyond study participant.
    Objective: To describe a community-based participatory research (CBPR) approach to the development and pilot testing of an intervention using community-based Peer Navigators with SCI to provide health education to individuals with SCI, with the goal of reducing preventable secondary conditions and rehospitalizations, and improving community participation.
    Methods: A CBPR framework guides the research partnership between academic researchers and a community-based team of individuals who either have SCI or provide SCI-related services. Using this framework, the processes of our research partnership supporting the current study are described, including partnership formation, problem identification, intervention development, and pilot testing of the intervention. Challenges associated with CBPR are identified.
    Results: Using CBPR, the SCI Peer Navigator intervention addresses the partnership’s priority issues identified in the formative studies. Utilization of the framework and integration of CBPR principles into all phases of research have promoted sustainability of the partnership. Recognition of and proactive planning for challenges that are commonly encountered in CBPR, such as sharing power and limited resources, has helped sustain our partnership.
    Conclusions: The CBPR framework provides a guide for inclusion of individuals with SCI as research partners in the development, implementation, and evaluation of interventions intended to improve outcomes after SCI. PMID:25224988

  2. Direct match data flow machine apparatus and process for data driven computing

    DOEpatents

    Davidson, G.S.; Grafe, V.G.

    1997-08-12

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.

  3. Data flow machine for data driven computing

    DOEpatents

    Davidson, G.S.; Grafe, V.G.

    1988-07-22

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.

  4. Data flow machine for data driven computing

    DOEpatents

    Davidson, George S.; Grafe, Victor G.

    1995-01-01

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.

  5. Direct match data flow machine apparatus and process for data driven computing

    DOEpatents

    Davidson, George S.; Grafe, Victor Gerald

    1997-01-01

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.

  6. Trypanosoma congolense: tissue distribution of long-term T- and B-cell responses in cattle.

    PubMed

    Lutje, V; Taylor, K A; Boulangé, A; Authié, E

    1995-11-01

    Memory T- and B-cell responses to trypanosome antigens were measured in peripheral blood mononuclear cells, spleen and lymph node cells obtained from four trypanotolerant N'Dama cattle which had been exposed to six experimental infections with Trypanosoma congolense. These cattle were treated with trypanocidal drugs following each infection and had remained aparasitemic for 3 years prior to this study. The antigens used were whole trypanosome lysate, variable surface glycoprotein, a 33-kDa cysteine protease (congopain) and a 70-kDa heat-shock protein. As parameters of T-cell-mediated immunity, we measured T-cell proliferation and IFN-gamma production. Lymph node cells, spleen cells and peripheral blood mononuclear cells all proliferated to a mitogenic stimulus (concanavalin A) but only lymph node cells responded to trypanosome antigens. Similarly, IFN-gamma was produced by both lymph node and spleen cells stimulated with concanavalin A but only by lymph node cells stimulated with variable surface glycoprotein and whole trypanosome lysate. T. congolense-specific antibodies were detected in sera and in supernatants of cultured lymph node and spleen cells after in vitro stimulation with lipopolysaccharide and recombinant bovine interleukin-2. In conclusion, we have demonstrated that memory T- and B-cell responses are detectable in various lymphoid organs in cattle 3 years following infection and treatment with T. congolense.

  7. Active non-volatile memory post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Sudarsun; Milojicic, Dejan S.; Talwar, Vanish

    A computing node includes an active Non-Volatile Random Access Memory (NVRAM) component which includes memory and a sub-processor component. The memory is to store data chunks received from a processor core, the data chunks comprising metadata indicating a type of post-processing to be performed on data within the data chunks. The sub-processor component is to perform post-processing of said data chunks based on said metadata.
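
    A minimal sketch of what such a metadata-tagged chunk might look like (all names and the particular post-processing operations below are our assumptions, not the patent's layout):

        #include <stddef.h>
        #include <stdint.h>

        typedef enum {          /* post-processing requested by the host core */
            POST_NONE = 0,
            POST_CHECKSUM,      /* record a checksum over the payload   */
            POST_COMPRESS,      /* compress in place before commit      */
            POST_SORT           /* sort fixed-size records in the chunk */
        } post_op_t;

        typedef struct {
            post_op_t op;        /* metadata: which post-processing to run */
            uint32_t  record_size;
            size_t    len;       /* payload bytes following this header    */
            /* payload follows in NVRAM */
        } nvram_chunk_hdr;

        /* The NVRAM sub-processor walks committed chunks and dispatches on
           hdr->op, so this work is offloaded from the main core. */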

  8. Shared versus distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The question of whether multiprocessors should have shared or distributed memory has attracted a great deal of attention. Some researchers argue strongly for building distributed memory machines, while others argue just as strongly for programming shared memory multiprocessors. A great deal of research is underway on both types of parallel systems. Special emphasis is placed on systems with a very large number of processors for computation-intensive tasks, and research and implementation trends are considered. It appears that the two types of systems will likely converge to a common form for large scale multiprocessors.

  9. Visual and Spatial Working Memory Are Not that Dissociated after All: A Time-Based Resource-Sharing Account

    ERIC Educational Resources Information Center

    Vergauwe, Evie; Barrouillet, Pierre; Camos, Valerie

    2009-01-01

    Examinations of interference between visual and spatial materials in working memory have suggested domain- and process-based fractionations of visuo-spatial working memory. The present study examined the role of central time-based resource sharing in visuo-spatial working memory and assessed its role in obtained interference patterns. Visual and…

  10. An analytical study of physical models with inherited temporal and spatial memory

    NASA Astrophysics Data System (ADS)

    Jaradat, Imad; Alquran, Marwan; Al-Khaled, Kamel

    2018-04-01

    Du et al. (Sci. Rep. 3, 3431 (2013)) demonstrated that the fractional derivative order can be physically interpreted as a memory index by fitting test data of memory phenomena. The aim of this work is to study analytically the joint effect of the memory index on time and space coordinates simultaneously. For this purpose, we introduce a novel bivariate fractional power series expansion that is accompanied by twofold fractional derivatives of orders α, β ∈ (0,1]. Further, some convergence criteria concerning our expansion are presented, and an analog of the well-known bivariate Taylor's formula in the sense of mixed fractional derivatives is obtained. Finally, in order to show the functionality and efficiency of this expansion, we employ the corresponding Taylor's series method to obtain closed-form solutions of various physical models with inherited time and space memory.
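
    For orientation, such a bivariate expansion takes a form like the following (notation ours; the paper's conventions may differ):

        u(t,x) = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty}
                 a_{nm}\, t^{n\alpha} x^{m\beta},
        \qquad \alpha, \beta \in (0,1],

    with Taylor-type coefficients, in the Caputo sense, of the form

        a_{nm} = \frac{\big(D_t^{\alpha}\big)^{n} \big(D_x^{\beta}\big)^{m} u(0,0)}
                      {\Gamma(n\alpha + 1)\, \Gamma(m\beta + 1)},

    where the fractional derivative operators are applied n and m times, respectively.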

  11. The Challenges of Internetworking Unattended Autonomous Sensors

    DTIC Science & Technology

    2006-12-01

    …number of nodes that can hear the broadcast by a squared factor. The establishment of path keys for multi-hop range extension should however be used…

  12. Implementing asynchronous collective operations in a multi-node processing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip

    A method, system, and computer program product are disclosed for implementing an asynchronous collective operation in a multi-node data processing system. In one embodiment, the method comprises sending data to a plurality of nodes in the data processing system, broadcasting a remote get to the plurality of nodes, and using this remote get to implement asynchronous collective operations on the data by the plurality of nodes. In one embodiment, each of the nodes performs only one task in the asynchronous operations, and each node sets up a base address table with an entry for a base address of a memory buffer associated with that node. In another embodiment, each of the nodes performs a plurality of tasks in said collective operations, and each task of each node sets up a base address table with an entry for a base address of a memory buffer associated with the task.
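
    The base address table amounts to one remotely resolvable buffer pointer per rank (or per task); a minimal sketch of the idea (our names and sizes, not the patented hardware structure):

        #include <stdint.h>

        #define MAX_RANKS 64            /* assumed table size */

        /* One entry per participating node or task: the base address that
           remote gets against that rank resolve to. */
        typedef struct {
            uint64_t base[MAX_RANKS];
        } base_addr_table;

        /* A broadcast remote get carrying (rank, offset) is satisfied at
           base[rank] + offset, so the collective proceeds asynchronously
           without exchanging pointers for every operation. */
        static inline uint64_t resolve(const base_addr_table *t,
                                       int rank, uint64_t offset) {
            return t->base[rank] + offset;
        }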

  13. Self-pacing direct memory access data transfer operations for compute nodes in a parallel computer

    DOEpatents

    Blocksome, Michael A

    2015-02-17

    Methods, apparatus, and products are disclosed for self-pacing DMA data transfer operations for nodes in a parallel computer that include: transferring, by an origin DMA on an origin node, an RTS message to a target node, the RTS message specifying a message on the origin node for transfer to the target node; receiving, in an origin injection FIFO for the origin DMA from a target DMA on the target node in response to transferring the RTS message, a target RGET descriptor followed by a DMA transfer operation descriptor, the DMA descriptor for transmitting a message portion to the target node, the target RGET descriptor specifying an origin RGET descriptor on the origin node that specifies an additional DMA descriptor for transmitting an additional message portion to the target node; processing, by the origin DMA, the target RGET descriptor; and processing, by the origin DMA, the DMA transfer operation descriptor.

  14. Securely and Flexibly Sharing a Biomedical Data Management System

    PubMed Central

    Wang, Fusheng; Hussels, Phillip; Liu, Peiya

    2011-01-01

    Biomedical database systems need not only to address the issues of managing complex data, but also to provide data security and access control. These include not only system-level security, but also instance-level access control, such as access to documents, schemas, or aggregations of information. The latter is becoming more important as multiple users can share a single scientific data management system to conduct their research, while data have to be protected before they are published or IP-protected. This problem is challenging because users’ needs for data security vary dramatically from one application to another, in terms of whom to share with, which resources to share, and at what access level. We develop a comprehensive data access framework for a biomedical data management system, SciPort. SciPort provides fine-grained, multi-level, space-based access control of resources at not only the object level (documents and schemas) but also the space level (resource sets aggregated in a hierarchical way). Furthermore, to simplify the management of users and privileges, a customizable role-based user model is developed. The access control is implemented efficiently by integrating access privileges into the backend XML database, thus efficient queries are supported. The secure access approach we take makes it possible for multiple users to share the same biomedical data management system with flexible access management and high data security. PMID:21625285

  15. Rapid recovery from transient faults in the fault-tolerant processor with fault-tolerant shared memory

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Butler, Bryan P.

    1990-01-01

    The Draper fault-tolerant processor with fault-tolerant shared memory (FTP/FTSM), which is designed to allow application tasks to continue execution during the memory alignment process, is described. Processor performance is not affected by memory alignment. In addition, the FTP/FTSM incorporates a hardware scrubber device to perform the memory alignment quickly during unused memory access cycles. The FTP/FTSM architecture is described, followed by an estimate of the time required for channel reintegration.

  16. OpenGeoSys-GEMS: Hybrid parallelization of a reactive transport code with MPI and threads

    NASA Astrophysics Data System (ADS)

    Kosakowski, G.; Kulik, D. A.; Shao, H.

    2012-04-01

    OpenGeoSys-GEMS is a generic purpose reactive transport code based on the operator splitting approach. The code couples the Finite-Element groundwater flow and multi-species transport modules of the OpenGeoSys (OGS) project (http://www.ufz.de/index.php?en=18345) with the GEM-Selektor research package to model thermodynamic equilibrium of aquatic (geo)chemical systems utilizing the Gibbs Energy Minimization approach (http://gems.web.psi.ch/). The combination of OGS and the GEM-Selektor kernel (GEMS3K) is highly flexible due to the object-oriented modular code structures and the well defined (memory based) data exchange modules. Like other reactive transport codes, the practical applicability of OGS-GEMS is often hampered by the long calculation time and large memory requirements.
    • For realistic geochemical systems, which might include dozens of mineral phases and several (non-ideal) solid solutions, the time needed to solve the chemical system with GEMS3K may increase exceptionally.
    • The codes are coupled in a sequential non-iterative loop. In order to keep the accuracy, the time step size is restricted. In combination with a fine spatial discretization, the time step size may become very small, which increases calculation times drastically even for small 1D problems.
    • The current version of OGS is not optimized for memory use, and the MPI version of OGS does not distribute data between nodes. Even for moderately small 2D problems, the number of MPI processes that fit into memory of up-to-date workstations or HPC hardware is limited.
    One strategy to overcome the above mentioned restrictions of OGS-GEMS is to parallelize the coupled code. For OGS a parallelized version already exists. It is based on a domain decomposition method implemented with MPI and provides a parallel solver for fluid and mass transport processes. In the coupled code, after solving fluid flow and solute transport, geochemical calculations are done in form of a central loop over all finite element nodes with calls to GEMS3K and consecutive calculations of changed material parameters. In a first step the existing MPI implementation was utilized to parallelize this loop. Calculations were split between the MPI processes and afterwards data was synchronized by using MPI communication routines. Furthermore, multi-threaded calculation of the loop was implemented with help of the boost thread library (http://www.boost.org). This implementation provides a flexible environment to distribute calculations between several threads. For each MPI process at least one and up to several dozens of worker threads are spawned. These threads do not replicate the complete OGS-GEM data structure and use only a limited amount of memory. Calculation of the central geochemical loop is shared between all threads. Synchronization between the threads is done by barrier commands. The overall number of local threads times MPI processes should match the number of available computing nodes. The combination of multi-threading and MPI provides an effective and flexible environment to speed up OGS-GEMS calculations while limiting the required memory use. Test calculations on different hardware show that for certain types of applications tremendous speedups are possible.
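
    The hybrid pattern described above, MPI across subdomains plus threads sharing the per-node chemistry loop, can be sketched with OpenMP standing in for the boost thread pool (our simplification; solve_chemistry is a placeholder for a GEMS3K call):

        #include <mpi.h>
        #include <omp.h>

        void solve_chemistry(int node_id);   /* placeholder for a GEMS3K call */

        /* Each MPI process owns n_local finite-element nodes from the domain
           decomposition; worker threads split the chemistry loop without
           replicating the process's data structures. */
        void chemistry_step(int n_local) {
            #pragma omp parallel for schedule(dynamic)
            for (int i = 0; i < n_local; i++)
                solve_chemistry(i);
            /* implicit barrier here; flow and transport then continue
               with MPI communication between subdomains */
        }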

  17. Relations of maternal style and child self-concept to autobiographical memories in chinese, chinese immigrant, and European american 3-year-olds.

    PubMed

    Wang, Qi

    2006-01-01

    The relations of maternal reminiscing style and child self-concept to children's shared and independent autobiographical memories were examined in a sample of 189 three-year-olds and their mothers from Chinese families in China, first-generation Chinese immigrant families in the United States, and European American families. Mothers shared memories with their children and completed questionnaires; children recounted autobiographical events and described themselves with a researcher. Independent of culture, gender, child age, and language skills, maternal elaborations and evaluations were associated with children's shared memory reports, and maternal evaluations and child agentic self-focus were associated with children's independent memory reports. Maternal style and child self-concept further mediated cultural influences on children's memory. The findings provide insight into the social-cultural construction of autobiographical memory.

  18. Initiation of Antiretroviral Therapy Restores CD4+ T Memory Stem Cell Homeostasis in Simian Immunodeficiency Virus-Infected Macaques.

    PubMed

    Cartwright, Emily K; Palesch, David; Mavigner, Maud; Paiardini, Mirko; Chahroudi, Ann; Silvestri, Guido

    2016-08-01

    Treatment of human immunodeficiency virus (HIV) infection with antiretroviral therapy (ART) has significantly improved prognosis. Unfortunately, interruption of ART almost invariably results in viral rebound, attributed to a pool of long-lived, latently infected cells. Based on their longevity and proliferative potential, CD4(+) T memory stem cells (TSCM) have been proposed as an important site of HIV persistence. In a previous study, we found that in simian immunodeficiency virus (SIV)-infected rhesus macaques (RM), CD4(+) TSCM are preserved in number but show (i) a decrease in the frequency of CCR5(+) cells, (ii) an expansion of the fraction of proliferating Ki-67(+) cells, and (iii) high levels of SIV DNA. To understand the impact of ART on both CD4(+) TSCM homeostasis and virus persistence, we conducted a longitudinal analysis of these cells in the blood and lymph nodes of 25 SIV-infected RM. We found that ART induced a significant restoration of CD4(+) CCR5(+) TSCM both in blood and in lymph nodes and a reduction in the fraction of proliferating CD4(+) Ki-67(+) TSCM in blood (but not lymph nodes). Importantly, we found that the level of SIV DNA in CD4(+) transitional memory (TTM) and effector memory (TEM) T cells declined ∼100-fold after ART in both blood and lymph nodes, while the level of SIV DNA in CD4(+) TSCM and central memory T cells (TCM) did not significantly change. These data suggest that ART is effective at partially restoring CD4(+) TSCM homeostasis, and the observed stable level of virus in TSCM supports the hypothesis that these cells are a critical contributor to SIV persistence. Understanding the roles of various CD4(+) T cell memory subsets in immune homeostasis and HIV/SIV persistence during antiretroviral therapy (ART) is critical to effectively treat and cure HIV infection. T memory stem cells (TSCM) are a unique memory T cell subset with enhanced self-renewal capacity and the ability to differentiate into other memory T cell subsets, such as central and transitional memory T cells (TCM and TTM, respectively). CD4(+) TSCM are disrupted but not depleted during pathogenic SIV infection. We find that ART is partially effective at restoring CD4(+) TSCM homeostasis and that SIV DNA harbored within this subset contracts more slowly than virus harbored in shorter-lived subsets, such as TTM and effector memory (TEM). Because of their ability to persist long-term in an individual, understanding the dynamics of virally infected CD4(+) TSCM during suppressive ART is important for future therapeutic interventions aimed at modulating immune activation and purging the HIV reservoir. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  19. Initiation of Antiretroviral Therapy Restores CD4+ T Memory Stem Cell Homeostasis in Simian Immunodeficiency Virus-Infected Macaques

    PubMed Central

    Cartwright, Emily K.; Palesch, David; Mavigner, Maud; Paiardini, Mirko; Chahroudi, Ann

    2016-01-01

    ABSTRACT Treatment of human immunodeficiency virus (HIV) infection with antiretroviral therapy (ART) has significantly improved prognosis. Unfortunately, interruption of ART almost invariably results in viral rebound, attributed to a pool of long-lived, latently infected cells. Based on their longevity and proliferative potential, CD4+ T memory stem cells (TSCM) have been proposed as an important site of HIV persistence. In a previous study, we found that in simian immunodeficiency virus (SIV)-infected rhesus macaques (RM), CD4+ TSCM are preserved in number but show (i) a decrease in the frequency of CCR5+ cells, (ii) an expansion of the fraction of proliferating Ki-67+ cells, and (iii) high levels of SIV DNA. To understand the impact of ART on both CD4+ TSCM homeostasis and virus persistence, we conducted a longitudinal analysis of these cells in the blood and lymph nodes of 25 SIV-infected RM. We found that ART induced a significant restoration of CD4+ CCR5+ TSCM both in blood and in lymph nodes and a reduction in the fraction of proliferating CD4+ Ki-67+ TSCM in blood (but not lymph nodes). Importantly, we found that the level of SIV DNA in CD4+ transitional memory (TTM) and effector memory (TEM) T cells declined ∼100-fold after ART in both blood and lymph nodes, while the level of SIV DNA in CD4+ TSCM and central memory T cells (TCM-) did not significantly change. These data suggest that ART is effective at partially restoring CD4+ TSCM homeostasis, and the observed stable level of virus in TSCM supports the hypothesis that these cells are a critical contributor to SIV persistence. IMPORTANCE Understanding the roles of various CD4+ T cell memory subsets in immune homeostasis and HIV/SIV persistence during antiretroviral therapy (ART) is critical to effectively treat and cure HIV infection. T memory stem cells (TSCM) are a unique memory T cell subset with enhanced self-renewal capacity and the ability to differentiate into other memory T cell subsets, such as central and transitional memory T cells (TCM and TTM, respectively). CD4+ TSCM are disrupted but not depleted during pathogenic SIV infection. We find that ART is partially effective at restoring CD4+ TSCM homeostasis and that SIV DNA harbored within this subset contracts more slowly than virus harbored in shorter-lived subsets, such as TTM and effector memory (TEM). Because of their ability to persist long-term in an individual, understanding the dynamics of virally infected CD4+ TSCM during suppressive ART is important for future therapeutic interventions aimed at modulating immune activation and purging the HIV reservoir. PMID:27170752

  20. Job Management Requirements for NAS Parallel Systems and Clusters

    NASA Technical Reports Server (NTRS)

    Saphir, William; Tanner, Leigh Ann; Traversat, Bernard

    1995-01-01

    A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.

  1. Administering an epoch initiated for remote memory access

    DOEpatents

    Blocksome, Michael A; Miller, Douglas R

    2014-03-18

    Methods, systems, and products are disclosed for administering an epoch initiated for remote memory access that include: initiating, by an origin application messaging module on an origin compute node, one or more data transfers to a target compute node for the epoch; initiating, by the origin application messaging module after initiating the data transfers, a closing stage for the epoch, including rejecting any new data transfers after initiating the closing stage for the epoch; determining, by the origin application messaging module, whether the data transfers have completed; and closing, by the origin application messaging module, the epoch if the data transfers have completed.
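
    The claimed sequence (start transfers, begin a closing stage that rejects new transfers, poll for completion, then close) is essentially a small state machine. Below is a minimal Python sketch of that lifecycle; the class and method names are hypothetical illustrations, not taken from the patent.

    ```python
    # Minimal sketch of the epoch lifecycle described above (hypothetical names).
    class EpochClosedError(Exception):
        pass

    class Epoch:
        def __init__(self):
            self.closing = False          # closing stage not yet initiated
            self.pending = set()          # in-flight data transfers

        def start_transfer(self, transfer_id):
            if self.closing:              # reject new transfers once closing begins
                raise EpochClosedError(f"transfer {transfer_id} rejected")
            self.pending.add(transfer_id)

        def complete_transfer(self, transfer_id):
            self.pending.discard(transfer_id)

        def begin_closing(self):
            self.closing = True           # initiate the closing stage

        def try_close(self):
            # the epoch may close only once all transfers have completed
            return self.closing and not self.pending

    epoch = Epoch()
    epoch.start_transfer("t1")
    epoch.begin_closing()                 # new start_transfer calls now raise
    epoch.complete_transfer("t1")
    assert epoch.try_close()
    ```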

  2. Administering an epoch initiated for remote memory access

    DOEpatents

    Blocksome, Michael A; Miller, Douglas R

    2012-10-23

    Methods, systems, and products are disclosed for administering an epoch initiated for remote memory access that include: initiating, by an origin application messaging module on an origin compute node, one or more data transfers to a target compute node for the epoch; initiating, by the origin application messaging module after initiating the data transfers, a closing stage for the epoch, including rejecting any new data transfers after initiating the closing stage for the epoch; determining, by the origin application messaging module, whether the data transfers have completed; and closing, by the origin application messaging module, the epoch if the data transfers have completed.

  3. Administering an epoch initiated for remote memory access

    DOEpatents

    Blocksome, Michael A.; Miller, Douglas R.

    2013-01-01

    Methods, systems, and products are disclosed for administering an epoch initiated for remote memory access that include: initiating, by an origin application messaging module on an origin compute node, one or more data transfers to a target compute node for the epoch; initiating, by the origin application messaging module after initiating the data transfers, a closing stage for the epoch, including rejecting any new data transfers after initiating the closing stage for the epoch; determining, by the origin application messaging module, whether the data transfers have completed; and closing, by the origin application messaging module, the epoch if the data transfers have completed.

  4. Granger causality analysis reveals distinct spatio-temporal connectivity patterns in motor and perceptual visuo-spatial working memory.

    PubMed

    Protopapa, Foteini; Siettos, Constantinos I; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2014-01-01

    We employed spectral Granger causality analysis on a full set of 56 electroencephalographic recordings acquired during the execution of either a 2D movement pointing or a perceptual (yes/no) change detection task with memory and non-memory conditions. On the basis of network characteristics across frequency bands, we provide evidence for the full dissociation of the corresponding cognitive processes. Movement-memory trial types exhibited higher degree nodes during the first 2 s of the delay period, mainly at central, left frontal and right-parietal areas. Change detection-memory trial types resulted in a three-peak temporal pattern of the total degree with higher degree nodes emerging mainly at central, right frontal, and occipital areas. Functional connectivity networks resulting from non-memory trial types were characterized by sparser structures for both tasks. The movement-memory trial types encompassed an apparent coarse flow from frontal to parietal areas, while the opposite flow from occipital and parietal to central and frontal areas was evident for the change detection-memory trial types. The differences among tasks and conditions were more profound in the α (8-12 Hz) and β (12-30 Hz) bands and less so in the γ (30-45 Hz) band. Our results favor the hypothesis that spatial working memory is a by-product of specific mental processes that engage common brain areas under different network organizations.
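
    The network-construction step described here (pairwise causality tests thresholded into a directed graph, then node degrees read off per node) can be illustrated compactly. The sketch below uses a time-domain Granger test from statsmodels on synthetic data; the study itself computes spectral Granger causality per frequency band, so this shows only the general recipe, not the authors' pipeline.

    ```python
    # Sketch: pairwise Granger tests -> directed adjacency -> node degrees.
    # Time-domain simplification of the (spectral, per-band) analysis above.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n_ch, n_t = 5, 1000                      # 5 toy "electrodes", 1000 samples
    eeg = rng.standard_normal((n_ch, n_t))
    eeg[1, 1:] += 0.8 * eeg[0, :-1]          # channel 0 drives channel 1

    alpha, maxlag = 0.01, 5
    adj = np.zeros((n_ch, n_ch), dtype=bool)
    for src in range(n_ch):
        for dst in range(n_ch):
            if src == dst:
                continue
            # column 2 is tested as a Granger-cause of column 1
            res = grangercausalitytests(
                np.column_stack([eeg[dst], eeg[src]]), maxlag, verbose=False)
            pmin = min(r[0]['ssr_ftest'][1] for r in res.values())
            adj[src, dst] = pmin < alpha     # edge src -> dst if significant

    degree = adj.sum(axis=0) + adj.sum(axis=1)   # total degree per node
    print("total degree per node:", degree)
    ```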

  5. SciServer Compute brings Analysis to Big Data in the Cloud

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally - but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.
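
    To give a flavor of the workflow, here is a query run server-side inside a notebook, assuming the SciScript-Python client library (the SciServer package) and its CasJobs.executeQuery call; the table, columns, and context name are illustrative, not taken from the abstract.

    ```python
    # Sketch of the server-side workflow, assuming the SciScript-Python client
    # (SciServer package); the SQL and context name here are illustrative.
    from SciServer import CasJobs

    sql = """
    SELECT TOP 10 objID, ra, dec, r
    FROM PhotoObj
    WHERE r BETWEEN 16 AND 17
    """
    # Executes in the SDSS database, co-located with the data; the result
    # comes back as a pandas DataFrame for further analysis in the notebook.
    df = CasJobs.executeQuery(sql, context="DR12")
    print(df.head())
    ```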

  6. Supporting shared data structures on distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and IPSC/2 hypercubes is described.

  7. Performance Evaluation of Remote Memory Access (RMA) Programming on Shared Memory Parallel Computers

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    The purpose of this study is to evaluate the feasibility of remote memory access (RMA) programming on shared memory parallel computers. We discuss different RMA based implementations of selected CFD application benchmark kernels and compare them to corresponding message passing based codes. For the message-passing implementation we use MPI point-to-point and global communication routines. For the RMA based approach we consider two different libraries supporting this programming model. One is a shared memory parallelization library (SMPlib) developed at NASA Ames, the other is the MPI-2 extensions to the MPI Standard. We give timing comparisons for the different implementation strategies and discuss the performance.
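
    A minimal contrast between the two models can be shown with MPI-2 one-sided operations through mpi4py (one of several RMA libraries; the paper's SMPlib is not assumed here). Rank 0 writes directly into rank 1's exposed window with no matching receive call:

    ```python
    # Sketch of the RMA (one-sided) style compared in the paper, using the
    # MPI-2 window interface via mpi4py; run with `mpiexec -n 2 python rma.py`.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = np.zeros(4, dtype='d')          # each rank exposes a window on buf
    win = MPI.Win.Create(buf, comm=comm)

    win.Fence()                           # open an access epoch
    if rank == 0:
        data = np.arange(4, dtype='d')
        win.Put(data, 1)                  # write into rank 1's memory directly
    win.Fence()                           # close the epoch; transfers complete

    if rank == 1:
        print("rank 1 received", buf)     # no Recv was ever posted
    win.Free()
    ```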

  8. Lambda network having 2^(m-1) nodes in each of m stages with each node coupled to four other nodes for bidirectional routing of data packets between nodes

    DOEpatents

    Napolitano, L.M. Jr.

    1995-11-28

    The Lambda network is a single stage, packet-switched interprocessor communication network for a distributed memory, parallel processor computer. Its design arises from the desired network characteristics of minimizing mean and maximum packet transfer time, local routing, expandability, deadlock avoidance, and fault tolerance. The network is based on fixed-degree nodes and has mean and maximum packet transfer distances that are functions of n, the number of processors. The routing method is detailed, as are methods for expandability, deadlock avoidance, and fault tolerance. 14 figs.

  9. SAHAYOG: A Testbed for Load Sharing under Failure,

    DTIC Science & Technology

    1987-07-01

    messages, shared memory and semaphores. To communicate using messages, processes create message queues using system-provided primitives. The message...The size of the memory that is to be shared is decided by the process when it makes a request for memory allocation. The semaphore option of IPC can be...used to prevent two or more concurrent processes from executing their critical sections at the same time. Semaphores must be used when the processes

  10. Implementation and evaluation of shared-memory communication and synchronization operations in MPICH2 using the Nemesis communication subsystem.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buntinas, D.; Mercier, G.; Gropp, W.

    2007-09-01

    This paper presents the implementation of MPICH2 over the Nemesis communication subsystem and the evaluation of its shared-memory performance. We describe design issues as well as some of the optimization techniques we employed. We conducted a performance evaluation over shared memory using microbenchmarks. The evaluation shows that MPICH2 Nemesis has very low communication overhead, making it suitable for smaller-grained applications.
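
    A typical microbenchmark of the kind used in such shared-memory evaluations is a two-rank ping-pong that measures small-message latency. A sketch with mpi4py, assuming both ranks are placed on one node so the shared-memory path is exercised:

    ```python
    # Sketch of a shared-memory ping-pong latency microbenchmark of the kind
    # used to evaluate MPICH2/Nemesis; run two ranks on one node: mpiexec -n 2.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    msg = np.zeros(8, dtype='b')          # small message -> latency-dominated
    reps = 10000

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(msg, dest=1); comm.Recv(msg, source=1)
        else:
            comm.Recv(msg, source=0); comm.Send(msg, dest=0)
    t1 = MPI.Wtime()

    if rank == 0:
        # one-way latency = round-trip time / 2, averaged over reps
        print(f"latency: {(t1 - t0) / reps / 2 * 1e6:.2f} us")
    ```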

  11. Federal Support for Research and Development

    DTIC Science & Technology

    2007-06-01

    In recent years, the share of federal research funding allocated to the life sciences has expanded, an emphasis supported by the high rates of returns to life sciences research that some studies have reported. But other studies indicate that researchers...changed over time, with the life sciences accounting for an increasing share of federal research spending since the 1990s (see Summary Figure 5

  12. Visual Information-Processing in the Perception of Features and Objects

    DTIC Science & Technology

    1989-01-05

    or nodes in a semantic memory network, whereas recall and recognition depend on separate episodic memory traces. In our experiment, we used the same...problem for the account in terms of the separation of episodic from semantic memory, since no pre-existing representations of our line patterns were... semantic memory: amnesic patients were thought to have lost the ability to lay down (or retrieve) episodic traces of autobiographical events, but had

  13. Comparison of two paradigms for distributed shared memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.

    1990-08-01

    The paper compares two paradigms for Distributed Shared Memory on loosely coupled computing systems: the shared data-object model as used in Orca, a programming language specially designed for loosely coupled computing systems and the Shared Virtual Memory model. For both paradigms the authors have implemented two systems, one using only point-to-point messages, the other using broadcasting as well. They briefly describe these two paradigms and their implementations. Then they compare their performance on four applications: the traveling salesman problem, alpha-beta search, matrix multiplication and the all pairs shortest paths problem. The measurements show that both paradigms can be used efficiently for programming large-grain parallel applications. Significant speedups were obtained on all applications. The unstructured Shared Virtual Memory paradigm achieves the best absolute performance, although this is largely due to the preliminary nature of the Orca compiler used. The structured shared data-object model achieves the highest speedups and is much easier to program and to debug.

  14. BLESS 2: accurate, memory-efficient and fast error correction method.

    PubMed

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
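
    BLESS-family correctors rest on the k-mer spectrum idea: k-mers that occur frequently across reads are "solid", and bases covered only by rare k-mers are suspect. The plain-Python sketch below shows just that idea; BLESS 2 itself stores the solid set in a Bloom filter and performs the actual base corrections in parallel.

    ```python
    # Sketch of the k-mer spectrum idea underlying BLESS-style correctors:
    # frequent k-mers are "solid"; bases covered only by rare k-mers are
    # flagged as likely errors. (This only flags suspects, it does not correct.)
    from collections import Counter

    def kmers(read, k):
        return [read[i:i + k] for i in range(len(read) - k + 1)]

    def weak_positions(read, counts, k, threshold=2):
        """Positions covered exclusively by low-multiplicity k-mers."""
        solid = [counts[km] >= threshold for km in kmers(read, k)]
        return [p for p in range(len(read))
                if not any(solid[i] for i in range(max(0, p - k + 1),
                                                   min(p + 1, len(solid))))]

    reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGAAC"]  # third read has an error
    k = 5
    counts = Counter(km for r in reads for km in kmers(r, k))
    print(weak_positions(reads[2], counts, k))  # -> [7, 8, 9], near the error
    ```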

  15. How visual working memory contents influence priming of visual attention.

    PubMed

    Carlisle, Nancy B; Kristjánsson, Árni

    2017-04-12

    Recent evidence shows that when the contents of visual working memory overlap with targets and distractors in a pop-out search task, intertrial priming is inhibited (Kristjánsson, Sævarsson & Driver, Psychon Bull Rev 20(3):514-521, 2013, Experiment 2). This may reflect an interesting interaction between implicit short-term memory (thought to underlie intertrial priming) and explicit visual working memory. Evidence from a non-pop-out search task suggests that it may specifically be holding distractors in visual working memory that disrupts intertrial priming (Cunningham & Egeth, Psychol Sci 27(4):476-485, 2016, Experiment 2). We examined whether the inhibition of priming depends on whether feature values in visual working memory overlap with targets or distractors in the pop-out search, and we found that the inhibition of priming resulted from holding distractors in visual working memory. These results are consistent with separate mechanisms of target and distractor effects in intertrial priming, and support the notion that the impact of implicit short-term memory and explicit visual working memory can interact when each provides conflicting attentional signals.

  16. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    PubMed

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
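
    The local-versus-cloud decision mentioned here can be captured by a small cost model. The function below is a hedged illustration with made-up prices and job timings, not the paper's actual formulae:

    ```python
    # Sketch of the local-serial vs. cloud decision described above; all
    # prices and timings are hypothetical placeholders, not from the paper.
    import math

    def local_time_h(n_jobs, t_job_h):
        return n_jobs * t_job_h                      # serial on one machine

    def cloud_cost_usd(n_jobs, t_job_h, n_nodes, price_per_node_h):
        makespan_h = math.ceil(n_jobs / n_nodes) * t_job_h
        billed_h = math.ceil(makespan_h) * n_nodes   # EC2-style per-node-hour
        return billed_h * price_per_node_h, makespan_h

    cost, makespan = cloud_cost_usd(n_jobs=500, t_job_h=0.5, n_nodes=25,
                                    price_per_node_h=0.10)
    print(f"local: {local_time_h(500, 0.5):.0f} h serial")
    print(f"cloud: {makespan:.0f} h wall-clock for ${cost:.2f}")
    ```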

  17. Performance management of high performance computing for medical image processing in Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  18. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services

    PubMed Central

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-01-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335

  19. General stability of memory-type thermoelastic Timoshenko beam acting on shear force

    NASA Astrophysics Data System (ADS)

    Apalara, Tijani A.

    2018-03-01

    In this paper, we consider a linear thermoelastic Timoshenko system with memory effects where the thermoelastic coupling is acting on shear force under Neumann-Dirichlet-Dirichlet boundary conditions. The same system with fully Dirichlet boundary conditions was considered by Messaoudi and Fareh (Nonlinear Anal TMA 74(18):6895-6906, 2011, Acta Math Sci 33(1):23-40, 2013), but they obtained a general stability result which depends on the speeds of wave propagation. In our case, we obtained a general stability result irrespective of the wave speeds of the system.

  20. Low-Latency and Energy-Efficient Data Preservation Mechanism in Low-Duty-Cycle Sensor Networks.

    PubMed

    Jiang, Chan; Li, Tao-Shen; Liang, Jun-Bin; Wu, Heng

    2017-05-06

    Similar to traditional wireless sensor networks (WSN), the nodes in low-duty-cycle sensor networks (LDC-WSN) have only limited memory and energy. Different from WSN, however, the nodes in LDC-WSN sleep most of the time to conserve their energy. This sleeping causes serious data transmission delays. However, each source node that has sensed data needs to quickly disseminate its data to other nodes in the network for redundant storage; otherwise, the data could be lost because its source node may be destroyed by external forces in a harsh environment. The quick dissemination requirement conflicts with the sleeping delays in the network. How to quickly disseminate all the source data to all the nodes, each with limited memory, for effective preservation is a challenging issue. In this paper, a low-latency and energy-efficient data preservation mechanism in LDC-WSN is proposed. The mechanism is totally distributed. The data can be disseminated to the network with low latency by using a revised probabilistic broadcasting mechanism, and then stored by the nodes with LT (Luby Transform) codes, which are a famous rateless erasure code. After the process of data dissemination and storage completes, some nodes may die due to being destroyed by external forces. If a mobile sink enters the network at any time and from any place to collect the data, it can recover all of the source data by visiting a small portion of surviving nodes in the network. Theoretical analyses and simulation results show that our mechanism outperforms existing mechanisms in data dissemination delay and energy efficiency.
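
    The storage scheme rests on LT codes: each storing node keeps an XOR of d randomly chosen source blocks, and a sink decodes by iteratively "peeling" degree-1 symbols. A toy sketch follows, using a uniform toy degree distribution rather than the robust soliton distribution real LT codes use:

    ```python
    # Toy LT-coded storage: nodes store XORs of random source blocks; a sink
    # decodes from a subset of survivors by peeling degree-1 symbols.
    import random

    def lt_encode(blocks, degree_dist=(1, 2, 3, 4)):
        d = random.choice(degree_dist)              # toy degree distribution
        idx = random.sample(range(len(blocks)), d)
        sym = 0
        for i in idx:
            sym ^= blocks[i]
        return set(idx), sym

    def lt_decode(symbols, n):
        out = [None] * n
        syms = [[set(ix), val] for ix, val in symbols]   # mutable copies
        progress = True
        while progress and any(b is None for b in out):
            progress = False
            for rec in syms:
                ix, val = rec
                known = {i for i in ix if out[i] is not None}
                for i in known:
                    val ^= out[i]                   # peel off recovered blocks
                ix -= known
                rec[1] = val
                if len(ix) == 1:                    # degree-1: recover a block
                    out[ix.pop()] = val
                    progress = True
        return out

    random.seed(7)
    source = [random.getrandbits(8) for _ in range(6)]   # 6 source blocks
    stored = [lt_encode(source) for _ in range(30)]      # redundant storage
    survivors = random.sample(stored, 24)                # some nodes destroyed
    # decoding succeeds with high probability; more survivors help if it fails
    print(lt_decode(survivors, len(source)) == source)
    ```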

  1. GeoSciML and EarthResourceML Update, 2012

    NASA Astrophysics Data System (ADS)

    Richard, S. M.; Commission for the Management and Application of Geoscience Information (CGI)

    2012-12-01

    CGI Interoperability Working Group activities during 2012 include deployment of services using the GeoSciML-Portrayal schema, addition of new vocabularies to support properties added in version 3.0, improvements to server software for deploying services, introduction of EarthResourceML v.2 for mineral resources, and collaboration with the IUSS on a markup language for soils information. GeoSciML and EarthResourceML have been used as the basis for the INSPIRE Geology and Mineral Resources specifications respectively. GeoSciML-Portrayal is an OGC GML simple-feature application schema for presentation of geologic map unit, contact, and shear displacement structure (fault and ductile shear zone) descriptions in web map services. Use of standard vocabularies for geologic age and lithology enables map services using shared legends to achieve visual harmonization of maps provided by different services. New vocabularies have been added to the collection of CGI vocabularies provided to support interoperable GeoSciML services, and can be accessed through http://resource.geosciml.org. Concept URIs can be dereferenced to obtain SKOS rdf or html representations using the SISSVoc vocabulary service. New releases of the FOSS GeoServer application greatly improve support for complex XML feature schemas like GeoSciML, and the ArcGIS for INSPIRE extension implements similar complex feature support for ArcGIS Server. These improved server implementations greatly facilitate deploying GeoSciML services. EarthResourceML v2 adds features for information related to mining activities. SoilML provides an interchange format for soil material, soil profile, and terrain information. Work is underway to add GeoSciML to the portfolio of Open Geospatial Consortium (OGC) specifications.

  2. Distributed Clone Detection in Static Wireless Sensor Networks: Random Walk with Network Division

    PubMed Central

    Khan, Wazir Zada; Aalsalem, Mohammed Y.; Saad, N. M.

    2015-01-01

    Wireless Sensor Networks (WSNs) are vulnerable to clone attacks or node replication attacks as they are deployed in hostile and unattended environments where they lack physical protection and the tamper-resistance of sensor nodes. As a result, an adversary can easily capture and compromise sensor nodes and, after replicating them, insert an arbitrary number of clones/replicas into the network. If these clones are not efficiently detected, the adversary is further capable of mounting a wide variety of internal attacks which can emasculate the various protocols and sensor applications. Several solutions have been proposed in the literature to address the crucial problem of clone detection, but they are not satisfactory, as they suffer from serious drawbacks. In this paper we propose a novel distributed solution called Random Walk with Network Division (RWND) for the detection of node replication attacks in static WSNs which is based on the claimer-reporter-witness framework and combines a simple random walk with network division. RWND detects clone(s) by following the claimer-reporter-witness framework, and a random walk is employed within each area for the selection of witness nodes. Splitting the network into levels and areas makes clone detection more efficient, and the high security of witness nodes is ensured with moderate communication and memory overheads. Our simulation results show that RWND outperforms the existing witness node based strategies with moderate communication and memory overheads. PMID:25992913
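
    The core of the claimer-reporter-witness approach is that witness sets chosen by random walks for two conflicting location claims of the same node ID intersect with non-negligible probability. A much-simplified single-area sketch (not the full RWND protocol with its levels and areas):

    ```python
    # Simplified claimer-reporter-witness sketch with random-walk witness
    # selection on a toy static WSN topology (not the full RWND protocol).
    import random
    import networkx as nx

    G = nx.grid_2d_graph(10, 10)                  # toy sensor field

    def random_walk_witnesses(G, start, steps):
        """Walk `steps` hops from a reporter; visited nodes act as witnesses."""
        witnesses, node = set(), start
        for _ in range(steps):
            node = random.choice(list(G.neighbors(node)))
            witnesses.add(node)
        return witnesses

    # Two conflicting claims for one node ID, forwarded by different reporters.
    w1 = random_walk_witnesses(G, (0, 0), 150)
    w2 = random_walk_witnesses(G, (9, 9), 150)
    # A non-empty intersection means some witness sees both claims -> clone
    # detected; a single pair of walks may miss, so the protocol amplifies
    # this probability across many claims and walks.
    print("collision witnesses:", w1 & w2)
    ```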

  3. Working memory resources are shared across sensory modalities.

    PubMed

    Salmela, V R; Moisala, M; Alho, K

    2014-10-01

    A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones, both containing two varying features, were presented simultaneously. In Experiment 2, two gratings and two tones, each containing only one varying feature, were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to that of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.

  4. Hidden Connectivity in Networks with Vulnerable Classes of Nodes

    NASA Astrophysics Data System (ADS)

    Krause, Sebastian M.; Danziger, Michael M.; Zlatić, Vinko

    2016-10-01

    In many complex systems representable as networks, nodes can be separated into different classes. Often these classes can be linked to a mutually shared vulnerability. Shared vulnerabilities may be due to a shared eavesdropper or correlated failures. In this paper, we show the impact of shared vulnerabilities on robust connectivity and how the heterogeneity of node classes can be exploited to maintain functionality by utilizing multiple paths. Percolation is the field of statistical physics that is generally used to analyze connectivity in complex networks, but in its existing forms, it cannot treat the heterogeneity of multiple vulnerable classes. To analyze the connectivity under these constraints, we describe each class as a color and develop a "color-avoiding" percolation. We present an analytic theory for random networks and a numerical algorithm for all networks, with which we can determine which nodes are color-avoiding connected and whether the maximal set percolates in the system. We find that the interaction of topology and color distribution implies a rich critical behavior, with critical values and critical exponents depending both on the topology and on the color distribution. Applying our physics-based theory to the Internet, we show how color-avoiding percolation can be used as the basis for new topologically aware secure communication protocols. Beyond applications to cybersecurity, our framework reveals a new layer of hidden structure in a wide range of natural and technological systems.
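
    For small graphs, color-avoiding connectivity can be checked by brute force directly from the definition: for every color, remove that color's nodes (endpoints exempt) and require a surviving path. A sketch with networkx; the paper's analytic theory and scalable algorithm go well beyond this:

    ```python
    # Brute-force check of color-avoiding connectivity: u and v are
    # color-avoiding connected if, for every color c, a path exists whose
    # interior avoids color-c nodes. Small-graph illustration only.
    import itertools
    import networkx as nx

    def color_avoiding_connected(G, colors, u, v):
        for c in set(colors.values()):
            removed = {n for n in G if colors[n] == c} - {u, v}  # endpoints exempt
            H = G.subgraph(set(G) - removed)
            if not nx.has_path(H, u, v):
                return False
        return True

    G = nx.cycle_graph(6)      # two disjoint paths between any node pair
    colors = {0: 'r', 1: 'g', 2: 'b', 3: 'r', 4: 'b', 5: 'g'}
    pairs = [(u, v) for u, v in itertools.combinations(G, 2)
             if color_avoiding_connected(G, colors, u, v)]
    print("color-avoiding connected pairs:", pairs)
    ```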

  5. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement based performance of these parallelized benchmarks from four perspectives: the efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.

  6. Distributed simulation using a real-time shared memory network

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.

    1993-01-01

    The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment. Therefore, a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation were measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to develop communication between the processors, and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.

  7. High Performance Data Transfer for Distributed Data Intensive Sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Chin; Cottrell, R 'Les' A.; Hanushevsky, Andrew B.

    We report on the development of ZX software providing high performance data transfer and encryption. The design scales in computation power, network interfaces, and IOPS while carefully balancing the available resources. Two U.S. patent-pending algorithms help tackle data sets containing lots of small files and very large files, and provide insensitivity to network latency. It has a cluster-oriented architecture, using peer-to-peer technologies to ease deployment, operation, usage, and resource discovery. Its unique optimizations enable effective use of flash memory. Using a pair of existing data transfer nodes at SLAC and NERSC, we compared its performance to that of bbcp and GridFTP and determined that they were comparable. With a proof of concept created using two four-node clusters with multiple distributed multi-core CPUs, network interfaces and flash memory, we achieved 155Gbps memory-to-memory over a 2x100Gbps link aggregated channel and 70Gbps file-to-file with encryption over a 5000 mile 100Gbps link.

  8. The effect of visual and musical suspense on brain activation and memory during naturalistic viewing.

    PubMed

    Bezdek, Matthew A; Wenzel, William G; Schumacher, Eric H

    2017-10-01

    We tested the hypothesis that, during naturalistic viewing, moments of increasing narrative suspense narrow the scope of attentional focus. We also tested how changes in the emotional congruency of the music would affect brain responses to suspense, as well as subsequent memory for narrative events. In our study, participants viewed suspenseful film excerpts while brain activation was measured with functional magnetic resonance imaging. Results indicated that suspense produced a pattern of activation consistent with the attention-narrowing hypothesis. For example, we observed decreased activation in the anterior calcarine sulcus, which processes the visual periphery, and increased activity in nodes of the ventral attention network and decreased activity in nodes of the default mode network. Memory recall was more accurate for high suspense than low suspense moments, but did not differ by soundtrack congruency. These findings provide neural evidence that perceptual, attentional, and memory processes respond to suspense on a moment-by-moment basis. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. MIROS: A Hybrid Real-Time Energy-Efficient Operating System for the Resource-Constrained Wireless Sensor Nodes

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; Gholami, Khalid El

    2014-01-01

    Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS, the constrained WSN resources (processor, memory and energy) can be utilized efficiently. Moreover, the user application development can be served soundly. In this article, a new hybrid, real-time, memory-efficient, energy-efficient, user-friendly and fault-tolerant WSN OS MIROS is designed and implemented. MIROS implements the hybrid scheduler and the dynamic memory allocator. Real-time scheduling can thus be achieved with low memory consumption. In addition, it implements a mid-layer software EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment) to decouple the WSN application from the low-level system. The application programming process can consequently be simplified and the application reprogramming performance improved. Moreover, it combines both the software and the multi-core hardware techniques to conserve the energy resources, improve the node reliability, as well as achieve a new debugging method. To evaluate the performance of MIROS, it is compared with the other WSN OSes (TinyOS, Contiki, SOS, openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable to be used even on the tight resource-constrained WSN nodes. It can support the real-time WSN applications. Furthermore, it is energy efficient, user friendly and fault tolerant. PMID:25248069

  10. MIROS: a hybrid real-time energy-efficient operating system for the resource-constrained wireless sensor nodes.

    PubMed

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; El Gholami, Khalid

    2014-09-22

    Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS, the constrained WSN resources (processor, memory and energy) can be utilized efficiently. Moreover, the user application development can be served soundly. In this article, a new hybrid, real-time, memory-efficient, energy-efficient, user-friendly and fault-tolerant WSN OS MIROS is designed and implemented. MIROS implements the hybrid scheduler and the dynamic memory allocator. Real-time scheduling can thus be achieved with low memory consumption. In addition, it implements a mid-layer software EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment) to decouple the WSN application from the low-level system. The application programming process can consequently be simplified and the application reprogramming performance improved. Moreover, it combines both the software and the multi-core hardware techniques to conserve the energy resources, improve the node reliability, as well as achieve a new debugging method. To evaluate the performance of MIROS, it is compared with the other WSN OSes (TinyOS, Contiki, SOS, openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable to be used even on the tight resource-constrained WSN nodes. It can support the real-time WSN applications. Furthermore, it is energy efficient, user friendly and fault tolerant.

  11. Exploring the SCOAP3 Research Contributions of the United States

    NASA Astrophysics Data System (ADS)

    Marsteller, Matthew

    2016-03-01

    The Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP3) is a successful global partnership of libraries, funding agencies and research centers. This presentation will inform the audience about SCOAP3 and also delve into descriptive statistics of the United States' intellectual contribution to particle physics via these open access journals. Exploration of the SCOAP3 particle physics literature using a variety of metrics tools such as Web of Science™, InCites™, Scopus® and SciVal will be shared. ORA or Sci2 will be used to visualize author collaboration networks.

  12. Multiprocessor shared-memory information exchange

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoline, L.L.; Bowers, M.D.; Crew, A.W.

    1989-02-01

    In distributed microprocessor-based instrumentation and control systems, the inter- and intra-subsystem communication requirements ultimately form the basis for the overall system architecture. This paper describes a software protocol which addresses the intra-subsystem communications problem. Specifically, the protocol allows for multiple processors to exchange information via a shared-memory interface. The authors' primary goal is to provide a reliable means for information to be exchanged between central application processor boards (masters) and dedicated function processor boards (slaves) in a single computer chassis. The resultant Multiprocessor Shared-Memory Information Exchange (MSMIE) protocol, a standard master-slave shared-memory interface suitable for use in nuclear safety systems, is designed to pass unidirectional buffers of information between the processors while providing a minimum, deterministic cycle time for this data exchange.
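
    The essence of such a master-slave exchange is a small set of shared buffers cycling through states (here: idle, newest, assigned) so the writer and reader never block each other. A loose Python threading analogue follows; the actual MSMIE protocol runs over backplane shared memory in compiled code with a deterministic cycle time, and the state names here are a simplification.

    ```python
    # Loose analogue of an MSMIE-style exchange: three shared buffers with
    # states so a producer and a consumer never block on each other.
    import threading

    IDLE, NEWEST, ASSIGNED = "idle", "newest", "assigned"

    class Mailbox:
        def __init__(self):
            self.lock = threading.Lock()
            self.bufs = [{"state": IDLE, "data": None} for _ in range(3)]

        def write(self, data):                  # slave (producer) side
            with self.lock:
                b = next(b for b in self.bufs if b["state"] == IDLE)
                b["data"] = data
                for o in self.bufs:             # stale newest becomes reusable
                    if o["state"] == NEWEST:
                        o["state"] = IDLE
                b["state"] = NEWEST

        def read(self):                         # master (consumer) side
            with self.lock:
                for b in self.bufs:
                    if b["state"] == NEWEST:
                        b["state"] = ASSIGNED
                        break
                else:
                    return None                 # no fresh buffer available
            data = b["data"]                    # copied outside the lock
            with self.lock:
                b["state"] = IDLE
            return data

    mb = Mailbox()
    mb.write("sensor frame 1"); mb.write("sensor frame 2")
    print(mb.read())                            # -> "sensor frame 2" (newest)
    ```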

  13. Digital seismo-acoustic signal processing aboard a wireless sensor platform

    NASA Astrophysics Data System (ADS)

    Marcillo, O.; Johnson, J. B.; Lorincz, K.; Werner-Allen, G.; Welsh, M.

    2006-12-01

    We are developing a low power, low-cost wireless sensor array to conduct real-time signal processing of earthquakes at active volcanoes. The sensor array, which integrates data from both seismic and acoustic sensors, is based on Moteiv TMote Sky wireless sensor nodes (www.moteiv.com). The nodes feature a Texas Instruments MSP430 microcontroller, 48 Kbytes of program memory, 10 Kbytes of static RAM, 1 Mbyte of external flash memory, and a 2.4-GHz Chipcon CC2420 IEEE 802.15.4 radio. The TMote Sky is programmed in TinyOS. Basic signal processing occurs on an array of three peripheral sensor nodes. These nodes are tied into a dedicated GPS receiver node, which is focused on time synchronization, and a central communications node, which handles data integration and additional processing. The sensor nodes incorporate dual 12-bit digitizers sampling a seismic sensor and a pressure transducer at 100 samples per second. The wireless capabilities of the system allow flexible array geometry, with a maximum aperture of 200 m. We have already developed the digital signal processing routines on board the Moteiv TMote sensor nodes. The developed routines accomplish Real-time Seismic-Amplitude Measurement (RSAM), Seismic Spectral-Amplitude Measurement (SSAM), and a user-configured Short-Term Averaging/Long-Term Averaging (STA/LTA) ratio, which is used to calculate first arrivals. The processed data from individual nodes are transmitted back to a central node, where additional processing may be performed. Such processing will include back azimuth determination and other wave field analyses. Future on-board signal processing will focus on event characterization utilizing pattern recognition and spectral characterization. The processed data are intended as low bandwidth information which can be transmitted periodically and at low cost through satellite telemetry to a web server. The processing is limited by the computational capabilities (RAM, ROM) of the nodes. Nevertheless, we envision this product to be a useful tool for assessing the state of unrest at remote volcanoes.
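
    Of the on-node routines, the STA/LTA trigger is the easiest to show compactly: compare a short-term average of signal energy against a long-term average and trigger when the ratio exceeds a threshold. A numpy sketch with illustrative window lengths for 100 samples-per-second data; the motes implement the equivalent in fixed-point code under TinyOS.

    ```python
    # STA/LTA event trigger on a synthetic seismic trace (illustrative
    # window lengths; the on-mote version runs in fixed point).
    import numpy as np

    def sta_lta(x, n_sta=100, n_lta=1000):
        """Ratio of short-term to long-term average of signal energy."""
        e = x.astype(float) ** 2
        csum = np.cumsum(np.concatenate(([0.0], e)))
        sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
        lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
        m = min(len(sta), len(lta))          # align windows at common end times
        return sta[-m:] / (lta[-m:] + 1e-12)

    rng = np.random.default_rng(0)
    sig = rng.standard_normal(6000)                   # background noise
    sig[3000:3200] += 8 * rng.standard_normal(200)    # synthetic "event"
    ratio = sta_lta(sig)
    # first index exceeding the threshold; add n_lta for the raw sample index
    print("trigger at aligned index:", np.argmax(ratio > 5))
    ```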

  14. Non-Invasive Brain Stimulation: A New Strategy in Mild Cognitive Impairment?

    PubMed

    Birba, Agustina; Ibáñez, Agustín; Sedeño, Lucas; Ferrari, Jesica; García, Adolfo M; Zimerman, Máximo

    2017-01-01

    Non-invasive brain stimulation (NIBS) techniques can significantly modulate cognitive functions in healthy subjects and patients with neuropsychiatric disorders. Recently, they have been applied in patients with mild cognitive impairment (MCI) and subjective cognitive impairment (SCI) to prevent or delay the development of Alzheimer's disease (AD). Here we review this emerging empirical corpus and discuss therapeutic effects of NIBS on several target functions (e.g., memory for face-name associations and non-verbal recognition, attention, psychomotor speed, everyday memory). Available studies have yielded mixed results, possibly due to differences among their tasks, designs, and samples, let alone the latter's small sizes. Thus, the impact of NIBS on cognitive performance in MCI and SCI remains to be determined. To foster progress in this direction, we outline methodological approaches that could improve the efficacy and specificity of NIBS in both conditions. Furthermore, we discuss the need for multicenter studies, accurate diagnosis, and longitudinal approaches combining NIBS with specific training regimes. These tenets could cement biomedical developments supporting new treatments for MCI and preventive therapies for AD.

  15. Non-Invasive Brain Stimulation: A New Strategy in Mild Cognitive Impairment?

    PubMed Central

    Birba, Agustina; Ibáñez, Agustín; Sedeño, Lucas; Ferrari, Jesica; García, Adolfo M.; Zimerman, Máximo

    2017-01-01

    Non-invasive brain stimulation (NIBS) techniques can significantly modulate cognitive functions in healthy subjects and patients with neuropsychiatric disorders. Recently, they have been applied in patients with mild cognitive impairment (MCI) and subjective cognitive impairment (SCI) to prevent or delay the development of Alzheimer’s disease (AD). Here we review this emerging empirical corpus and discuss therapeutic effects of NIBS on several target functions (e.g., memory for face-name associations and non-verbal recognition, attention, psychomotor speed, everyday memory). Available studies have yielded mixed results, possibly due to differences among their tasks, designs, and samples, let alone the latter’s small sizes. Thus, the impact of NIBS on cognitive performance in MCI and SCI remains to be determined. To foster progress in this direction, we outline methodological approaches that could improve the efficacy and specificity of NIBS in both conditions. Furthermore, we discuss the need for multicenter studies, accurate diagnosis, and longitudinal approaches combining NIBS with specific training regimes. These tenets could cement biomedical developments supporting new treatments for MCI and preventive therapies for AD. PMID:28243198

  16. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-01-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.

  17. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-09-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.

  18. NAS Parallel Benchmark. Results 11-96: Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HPF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993, and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed in a way that allows the programmer to code an application at a high level, and then selectively optimize portions of the code by dropping into message-passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP/2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI (message passing interface)) combinations will be compared, based on the latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, thus providing a cross-machine and cross-model comparison. Specifically, HPF based NPB results will be compared with MPI based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition we would also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin2000.

  19. Robinson and Camarda in Node 1 / Unity module

    NASA Image and Video Library

    2005-07-30

    ISS011-E-11357 (30 July 2005) --- Astronauts Stephen K. Robinson and Charles J. Camarda, STS-114 mission specialists, share a light moment while Robinson plays a guitar in the Unity node of the International Space Station.

  20. Documentation of preventive care for pressure ulcers initiated during annual evaluations in SCI.

    PubMed

    Guihan, Marylou; Murphy, Deidre; Rogers, Thea J; Parachuri, Ramadevi; Sae Richardson, Michael; Lee, Kenneth K; Bates-Jensen, Barbara M

    2016-05-01

    Community-acquired pressure ulcers (PrUs) are a frequent cause of hospitalization of Veterans with spinal cord injury (SCI). The Veterans Health Administration (VHA) recommends that SCI annual evaluations include assessment of PrU risk factors, a thorough skin inspection, and sharing of recommendations for PrU prevention strategies. We characterized the consistency of preventive skin care during annual evaluations for Veterans with SCI as a first step in identifying strategies to more actively promote PrU prevention care in other healthcare encounters. Retrospective cross-sectional observational design, including review of electronic medical records for 206 Veterans with SCI admitted to 2 VA SCI centers from January-December, 2011. Proportion of applicable skin health elements documented (number of skin health elements documented/number of applicable elements). Our sample was primarily white (78%) and male (96.1%), with a mean age of 61 years. 40% of participants were hospitalized for PrU treatment, with a mean of 294 days (median = 345 days) from annual evaluation to the index admission. On average, Veterans received 75.5% (IQR 68-86%) of applicable skin health elements. Documentation of applicable skin health elements was significantly higher during inpatient vs. outpatient annual evaluations (mean elements received = 80.3% and 64.3%, respectively, P < 0.001). No significant differences were observed in documentation of skin health elements for Veterans at high vs. low PrU risk. Additional PrU preventive care in the VHA outpatient setting may increase identification and detection of PrU risk factors and early PrU damage for Veterans with SCI in the community, allowing for earlier intervention.

  1. A Comparison of Global Indexing Schemes to Facilitate Earth Science Data Management

    NASA Astrophysics Data System (ADS)

    Griessbaum, N.; Frew, J.; Rilee, M. L.; Kuo, K. S.

    2017-12-01

    Recent advances in database technology have led to systems optimized for managing petabyte-scale multidimensional arrays. These array databases are a good fit for subsets of the Earth's surface that can be projected into a rectangular coordinate system with acceptable geometric fidelity. However, for global analyses, array databases must address the same distortions and discontinuities that apply to map projections in general. The array database SciDB supports enormous databases spread across thousands of computing nodes. Additionally, the following SciDB characteristics are particularly germane to the coordinate system problem: SciDB efficiently stores and manipulates sparse (i.e., mostly empty) arrays; SciDB arrays have 64-bit indexes; and SciDB supports user-defined data types, functions, and operators. We have implemented two geospatial indexing schemes in SciDB. The simpler uses two array dimensions to represent longitude and latitude. For representation as 64-bit integers, the coordinates are multiplied by a scale factor large enough to yield an appropriate Earth surface resolution (e.g., a scale factor of 100,000 yields a resolution of approximately 1 m at the equator). Aside from the longitudinal discontinuity, the principal disadvantage of this scheme is its fixed scale factor. The second scheme uses a single array dimension to represent the bit-codes for locations in a hierarchical triangular mesh (HTM) coordinate system. An HTM maps the Earth's surface onto an octahedron, and then recursively subdivides each triangular face to the desired resolution. Earth surface locations are represented as the concatenation of an octahedron face code and a quadtree code within the face. Unlike our integerized lat-lon scheme, the HTM allows objects of different sizes (e.g., pixels with differing resolutions) to be represented in the same indexing scheme. We present an evaluation of the relative utility of these two schemes for managing and analyzing MODIS swath data.
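
    As an illustration of the first scheme, here is a minimal sketch (our own, not SciDB code) of integerized lat-lon indexing with a fixed scale factor; the type and function names are hypothetical.

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical integerized lat-lon indexing: coordinates are scaled
         by a fixed factor so they fit 64-bit array indexes. A factor of
         100000 gives roughly 1 m resolution at the equator. */
      #define SCALE_FACTOR 100000LL

      typedef struct { int64_t lon_idx; int64_t lat_idx; } GridIndex;

      static GridIndex latlon_to_index(double lon_deg, double lat_deg) {
          GridIndex g;
          g.lon_idx = (int64_t)(lon_deg * SCALE_FACTOR);
          g.lat_idx = (int64_t)(lat_deg * SCALE_FACTOR);
          return g;
      }

      int main(void) {
          GridIndex g = latlon_to_index(-119.8489, 34.4140);
          printf("lon index: %lld, lat index: %lld\n",
                 (long long)g.lon_idx, (long long)g.lat_idx);
          return 0;
      }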

  2. SITRUS: Semantic Infrastructure for Wireless Sensor Networks

    PubMed Central

    Bispo, Kalil A.; Rosa, Nelson S.; Cunha, Paulo R. F.

    2015-01-01

    Wireless sensor networks (WSNs) are made up of nodes with limited resources, such as processing, bandwidth, memory and, most importantly, energy. For this reason, it is essential that WSNs always work to reduce power consumption as much as possible in order to maximize their lifetime. In this context, this paper presents SITRUS (semantic infrastructure for wireless sensor networks), which aims to reduce the power consumption of WSN nodes using ontologies. SITRUS consists of two major parts: a message-oriented middleware responsible for both a message-oriented communication service and a reconfiguration service; and a semantic information processing module whose purpose is to generate a semantic database that provides the basis for deciding whether a WSN node needs to be reconfigured or not. In order to evaluate the proposed solution, we carried out an experimental evaluation to assess the power consumption and memory usage of WSN applications built atop SITRUS. PMID:26528974

  3. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it will be to accomplish its task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like expected runtime, is defined beforehand by the Production Manager in the best case, and by arbitrary fixed values by default. In the case of LHCb's Workload Management System, no mechanisms are provided which automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. Particularly in the context of multicore jobs this presents a major problem, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the reduction of the overall memory footprint. Therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Following the features, a supervised learning algorithm is developed based on history-based prediction. The aim is to learn over time how jobs' runtime and memory evolve under changes in experiment conditions and software versions. It will be shown that the estimation can be notably improved if experiment conditions are taken into account.
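
    As a minimal sketch of the history-based idea (not the paper's actual learning algorithm), one can keep a per-job-type running mean of observed runtimes and use it as the next request; all names here are hypothetical.

      #include <stdio.h>

      /* Hypothetical history-based estimator: an incremental running mean
         of observed runtimes, used as the next job's requested time. */
      typedef struct {
          double mean;   /* current runtime estimate in seconds */
          long   count;  /* number of completed jobs observed   */
      } JobHistory;

      static void observe(JobHistory *h, double runtime) {
          h->count += 1;
          h->mean += (runtime - h->mean) / (double)h->count;  /* incremental mean */
      }

      static double estimate(const JobHistory *h, double fallback) {
          return h->count > 0 ? h->mean : fallback;  /* arbitrary default at first */
      }

      int main(void) {
          JobHistory sim = {0.0, 0};
          double observed[] = {3600.0, 4200.0, 3900.0};
          for (int i = 0; i < 3; i++)
              observe(&sim, observed[i]);
          printf("next request: %.0f s\n", estimate(&sim, 7200.0));
          return 0;
      }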

  4. Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Yamada, Masako

    The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.
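
    The following is a minimal shared-memory sketch of one common way to avoid the memory conflicts mentioned above: each OpenMP thread accumulates forces into a private buffer, and the buffers are reduced in a fixed order afterward. This illustrates the general force-decomposition idea only, not the paper's GPU algorithm.

      #include <omp.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define N 1000  /* number of atoms (illustrative) */

      int main(void) {
          double *force = calloc(N, sizeof(double));
          int nthreads = omp_get_max_threads();
          /* One private force buffer per thread, so concurrent interaction
             terms never write to the same memory location. */
          double *priv = calloc((size_t)nthreads * N, sizeof(double));

          #pragma omp parallel
          {
              double *my = priv + (size_t)omp_get_thread_num() * N;
              #pragma omp for
              for (int i = 0; i < N; i++) {
                  my[i] += 1.0;                    /* stand-in force on atom i */
                  if (i + 1 < N) my[i + 1] += 0.5; /* ...and on a neighbor     */
              }
          }

          /* Reduce the per-thread buffers in a fixed order (deterministic). */
          for (int t = 0; t < nthreads; t++)
              for (int i = 0; i < N; i++)
                  force[i] += priv[(size_t)t * N + i];

          printf("force[0] = %.2f\n", force[0]);
          free(priv);
          free(force);
          return 0;
      }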

  5. Checkpointing for a hybrid computing node

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cher, Chen-Yong

    2016-03-08

    According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
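
    A minimal sketch of the sequence described (hypothetical names, no real accelerator API): snapshot the task state into a checkpoint buffer standing in for accelerator local memory, resume the task, and copy the checkpoint out to the main processor.

      #include <stdio.h>
      #include <string.h>

      /* Hypothetical task state; the two checkpoint buffers stand in for
         accelerator local memory and main-processor memory, respectively. */
      typedef struct { long iteration; double partial_result; } TaskState;

      static TaskState accel_state;       /* state of the executing task     */
      static TaskState accel_checkpoint;  /* checkpoint in "local memory"    */
      static TaskState host_checkpoint;   /* copy held by the main processor */

      static void create_checkpoint(void) {
          /* snapshot enough state to restart the task later */
          memcpy(&accel_checkpoint, &accel_state, sizeof(TaskState));
      }

      int main(void) {
          for (accel_state.iteration = 0; accel_state.iteration < 100;
               accel_state.iteration++) {
              accel_state.partial_result += 0.5;  /* task executes */
              if (accel_state.iteration == 50) {
                  create_checkpoint();            /* checkpoint locally */
                  /* execution resumes; in the patented scheme this transfer
                     overlaps with continued execution of the task */
                  memcpy(&host_checkpoint, &accel_checkpoint, sizeof(TaskState));
              }
          }
          printf("restartable from iteration %ld\n", host_checkpoint.iteration);
          return 0;
      }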

  6. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    PubMed

    Kovac, André David; Koall, Maximilian; Pipa, Gordon; Toutounji, Hazem

    2016-01-01

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances to predator/prey population interactions. The evidence is mounting, not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single-node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single-node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.
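
    A minimal numerical sketch of a single-node delay-coupled reservoir, the architecture this work extends: one nonlinear node driven by its own delayed output plus a time-multiplexed input. The parameters and tanh nonlinearity are illustrative choices, and the trained linear feedback introduced by the paper is not included.

      #include <math.h>
      #include <stdio.h>

      #define DELAY 50   /* delay length in steps (illustrative) */
      #define STEPS 200

      int main(void) {
          double history[DELAY] = {0};  /* circular buffer of past outputs */
          int head = 0;

          for (int t = 0; t < STEPS; t++) {
              double input = sin(0.1 * t);     /* stimulus injected in time */
              double delayed = history[head];  /* x(t - DELAY) */
              /* single nonlinear node coupled to its own delayed output */
              double x = tanh(0.9 * delayed + 0.5 * input);
              history[head] = x;               /* overwrite the oldest value */
              head = (head + 1) % DELAY;
              if (t % 50 == 0)
                  printf("t=%3d  x=%+.4f\n", t, x);
          }
          return 0;
      }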

  7. Persistent Memory in Single Node Delay-Coupled Reservoir Computing

    PubMed Central

    Pipa, Gordon; Toutounji, Hazem

    2016-01-01

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances to predator/prey population interactions. The evidence is mounting, not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single-node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single-node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality. PMID:27783690

  8. Scaling to diversity: The DERECHOS distributed infrastructure for analyzing and sharing data

    NASA Astrophysics Data System (ADS)

    Rilee, M. L.; Kuo, K. S.; Clune, T.; Oloso, A.; Brown, P. G.

    2016-12-01

    Integrating Earth Science data from diverse sources such as satellite imagery and simulation output can be expensive and time-consuming, limiting scientific inquiry and the quality of our analyses. Reducing these costs will improve innovation and quality in science. The current Earth Science data infrastructure focuses on downloading data based on requests formed from the search and analysis of associated metadata. While the data products provided by archives may use the best available data sharing technologies, scientist end-users generally do not have such resources (including staff) available to them. Furthermore, only once an end-user has received the data from multiple diverse sources and has integrated them can the actual analysis and synthesis begin. The cost of getting from idea to the point where synthesis can start dramatically slows progress. In this presentation we discuss a distributed computational and data storage framework that eliminates much of the aforementioned cost. The SciDB distributed array database is central, as it is optimized for scientific computing involving very large arrays, performing better than less specialized frameworks like Spark. Adding spatiotemporal functions to SciDB creates a powerful platform for analyzing and integrating massive, distributed datasets. SciDB allows Big Earth Data analysis to be performed "in place" without the need for expensive downloads and end-user resources. Spatiotemporal indexing technologies such as the hierarchical triangular mesh enable the compute and storage affinity needed to efficiently perform co-located and conditional analyses while minimizing data transfers. These technologies automate the integration of diverse data sources using the framework, a critical step beyond current metadata search and analysis. Instead of downloading data into their idiosyncratic local environments, end-users can generate and share data products integrated from multiple diverse sources using a common shared environment, turning distributed active archive centers (DAACs) from warehouses into distributed active analysis centers.

  9. Episodic memory retrieval, parietal cortex, and the Default Mode Network: functional and topographic analyses

    PubMed Central

    Sestieri, Carlo; Corbetta, Maurizio; Romani, Gian Luca; Shulman, Gordon L.

    2011-01-01

    The default mode network (DMN) is often considered a functionally homogeneous system that is broadly associated with internally directed cognition (e.g. episodic memory, theory of mind, self-evaluation). However, few studies have examined how this network interacts with other networks during putative “default” processes such as episodic memory retrieval. Using fMRI, we investigated the topography and response profile of human parietal regions inside and outside the DMN, independently defined using task-evoked deactivations and resting state functional connectivity, during episodic memory retrieval. Memory retrieval activated posterior nodes of the DMN, particularly the angular gyrus, but also more anterior and dorsal parietal regions that were anatomically separate from the DMN. The two sets of parietal regions showed different resting-state functional connectivity and response profiles. During memory retrieval, responses in DMN regions peaked sooner than non-DMN regions, which in turn showed responses that were sustained until a final memory judgment was reached. Moreover, a parahippocampal region that showed strong resting-state connectivity with parietal DMN regions also exhibited a pattern of task-evoked activity similar to that exhibited by DMN regions. These results suggest that DMN parietal regions directly supported memory retrieval, whereas non-DMN parietal regions were more involved in post-retrieval processes such as memory-based decision making. Finally, a robust functional dissociation within the DMN was observed. While angular gyrus and posterior cingulate/precuneus were significantly activated during memory retrieval, an anterior DMN node in medial prefrontal cortex was strongly deactivated. This latter finding demonstrates functional heterogeneity rather than homogeneity within the DMN during episodic memory retrieval. PMID:21430142

  10. Episodic memory retrieval, parietal cortex, and the default mode network: functional and topographic analyses.

    PubMed

    Sestieri, Carlo; Corbetta, Maurizio; Romani, Gian Luca; Shulman, Gordon L

    2011-03-23

    The default mode network (DMN) is often considered a functionally homogeneous system that is broadly associated with internally directed cognition (e.g., episodic memory, theory of mind, self-evaluation). However, few studies have examined how this network interacts with other networks during putative "default" processes such as episodic memory retrieval. Using functional magnetic resonance imaging, we investigated the topography and response profile of human parietal regions inside and outside the DMN, independently defined using task-evoked deactivations and resting-state functional connectivity, during episodic memory retrieval. Memory retrieval activated posterior nodes of the DMN, particularly the angular gyrus, but also more anterior and dorsal parietal regions that were anatomically separate from the DMN. The two sets of parietal regions showed different resting-state functional connectivity and response profiles. During memory retrieval, responses in DMN regions peaked sooner than non-DMN regions, which in turn showed responses that were sustained until a final memory judgment was reached. Moreover, a parahippocampal region that showed strong resting-state connectivity with parietal DMN regions also exhibited a pattern of task-evoked activity similar to that exhibited by DMN regions. These results suggest that DMN parietal regions directly supported memory retrieval, whereas non-DMN parietal regions were more involved in postretrieval processes such as memory-based decision making. Finally, a robust functional dissociation within the DMN was observed. Whereas angular gyrus and posterior cingulate/precuneus were significantly activated during memory retrieval, an anterior DMN node in medial prefrontal cortex was strongly deactivated. This latter finding demonstrates functional heterogeneity rather than homogeneity within the DMN during episodic memory retrieval.

  11. Chaining direct memory access data transfer operations for compute nodes in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.

    2010-09-28

    Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer that include: receiving, by an origin DMA engine on an origin node in an origin injection FIFO buffer for the origin DMA engine, a RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on the target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet.
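
    A rough data-structure sketch of the chaining idea (hypothetical types, not the patent's actual descriptor layout): each RGET descriptor carries a data-transfer descriptor plus a link to the next RGET descriptor, so processing the head descriptor walks the whole chain.

      #include <stdio.h>

      /* Hypothetical descriptor types sketching the chaining idea. */
      typedef enum { DESC_DATA, DESC_RGET } DescKind;

      typedef struct Descriptor {
          DescKind kind;
          const char *label;             /* what this descriptor transfers  */
          struct Descriptor *payload;    /* carried data-transfer descriptor */
          struct Descriptor *next_rget;  /* follow-on RGET descriptor, if any */
      } Descriptor;

      static void process(Descriptor *d) {
          while (d) {
              if (d->payload)  /* perform the carried DMA transfer */
                  printf("transfer: %s\n", d->payload->label);
              d = d->next_rget;  /* chain to the next RGET descriptor */
          }
      }

      int main(void) {
          Descriptor xfer1 = {DESC_DATA, "chunk 1", NULL, NULL};
          Descriptor xfer2 = {DESC_DATA, "chunk 2", NULL, NULL};
          Descriptor rget2 = {DESC_RGET, "rget 2", &xfer2, NULL};
          Descriptor rget1 = {DESC_RGET, "rget 1", &xfer1, &rget2};
          process(&rget1);  /* one injection drives the whole chain */
          return 0;
      }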

  12. Signaling completion of a message transfer from an origin compute node to a target compute node

    DOEpatents

    Blocksome, Michael A [Rochester, MN

    2011-02-15

    Signaling completion of a message transfer from an origin node to a target node includes: sending, by an origin DMA engine, an RTS message, the RTS message specifying an application message for transfer to the target node from the origin node; receiving, by the origin DMA engine, a remote get message containing a data descriptor for the message and a completion notification descriptor, the completion notification descriptor specifying a local memory FIFO data transfer operation for transferring data locally on the origin node; inserting, by the origin DMA engine in an injection FIFO buffer, the data descriptor followed by the completion notification descriptor; transferring, by the origin DMA engine to the target node, the message in dependence upon the data descriptor; and notifying, by the origin DMA engine, the application that transfer of the message is complete in dependence upon the completion notification descriptor.

  13. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Blocksome, Michael A

    2014-04-01

    Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, a RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.
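
    A minimal single-process sketch of the buffer-splitting idea in this record and the two identical records that follow (our illustration, not the patented DMA protocol): stream chunks from one end of the buffer until the acknowledgement arrives, then transfer the remainder starting from the other end.

      #include <stdio.h>
      #include <string.h>

      #define BUFSIZE 16

      int main(void) {
          char buffer[BUFSIZE + 1] = "ABCDEFGHIJKLMNOP";  /* data to send */
          char target[BUFSIZE + 1] = {0};
          int front = 0, back = BUFSIZE;  /* untransferred region [front, back) */
          int acked = 0;

          /* Phase 1: memory-FIFO-style transfer from one end of the
             buffer, chunk by chunk, until the RTS acknowledgement arrives. */
          while (!acked) {
              memcpy(target + front, buffer + front, 4);  /* one 4-byte chunk */
              front += 4;
              acked = (front == 8);  /* pretend the ACK arrives here */
          }

          /* Phase 2: direct put of the remainder, working from the other
             end of the buffer toward the point the FIFO transfer reached. */
          while (back > front) {
              back -= 4;
              memcpy(target + back, buffer + back, 4);
          }

          printf("received: %s\n", target);
          return 0;
      }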

  14. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Blocksome, Michael A

    2014-04-22

    Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, a RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.

  15. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Blocksome, Michael A

    2013-07-02

    Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, a RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.

  16. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  17. Direct access inter-process shared memory

    DOEpatents

    Brightwell, Ronald B; Pedretti, Kevin; Hudson, Trammell B

    2013-10-22

    A technique for directly sharing physical memory between processes executing on processor cores is described. The technique includes loading a plurality of processes into the physical memory for execution on a corresponding plurality of processor cores sharing the physical memory. An address space is mapped to each of the processes by populating a first entry in a top level virtual address table for each of the processes. The address space of each of the processes is cross-mapped into each of the processes by populating one or more subsequent entries of the top level virtual address table with the first entry in the top level virtual address table from other processes.
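
    The patent's mechanism works by cross-mapping top-level page-table entries; as a simpler, standard illustration of processes directly sharing physical memory, here is a POSIX shm_open/mmap sketch (this is ordinary POSIX shared memory, not the patented technique; error checking is omitted for brevity).

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void) {
          /* Create and size a named shared memory object. */
          int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
          ftruncate(fd, 4096);
          char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);

          if (fork() == 0) {                  /* child shares the same pages */
              strcpy(mem, "written by child");
              _exit(0);
          }
          wait(NULL);
          printf("parent reads: %s\n", mem);  /* sees the child's write */

          munmap(mem, 4096);
          close(fd);
          shm_unlink("/demo_shm");
          return 0;
      }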

  18. Memory Network For Distributed Data Processors

    NASA Technical Reports Server (NTRS)

    Bolen, David; Jensen, Dean; Millard, ED; Robinson, Dave; Scanlon, George

    1992-01-01

    Universal Memory Network (UMN) is a modular, digital data-communication system enabling computers with differing bus architectures to share 32-bit-wide data between locations up to 3 km apart with less than one millisecond of latency. It makes it possible to design sophisticated real-time and near-real-time data-processing systems without data-transfer "bottlenecks". This enterprise network permits transmission of a volume of data equivalent to an encyclopedia each second. Facilities benefiting from the Universal Memory Network include telemetry stations, simulation facilities, power plants, and large laboratories, or any facility sharing very large volumes of data. The main hub of the UMN is a reflection center connecting smaller hubs called Shared Memory Interfaces.

  19. Grouping and binding in visual short-term memory.

    PubMed

    Quinlan, Philip T; Cohen, Dale J

    2012-09-01

    Findings of 2 experiments are reported that challenge the current understanding of visual short-term memory (VSTM). In both experiments, a single study display, containing 6 colored shapes, was presented briefly and then probed with a single colored shape. At stake is how VSTM retains a record of different objects that share common features: In the 1st experiment, 2 study items sometimes shared a common feature (either a shape or a color). The data revealed a color sharing effect, in which memory was much better for items that shared a common color than for items that did not. The 2nd experiment showed that the size of the color sharing effect depended on whether a single pair of items shared a common color or whether 2 pairs of items were so defined-memory for all items improved when 2 color groups were presented. In explaining performance, an account is advanced in which items compete for a fixed number of slots, but then memory recall for any given stored item is prone to error. A critical assumption is that items that share a common color are stored together in a slot as a chunk. The evidence provides further support for the idea that principles of perceptual organization may determine the manner in which items are stored in VSTM. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  20. Livermore Big Artificial Neural Network Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin

    2016-07-01

    LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  1. Building Intrusion Detection with a Wireless Sensor Network

    NASA Astrophysics Data System (ADS)

    Wälchli, Markus; Braun, Torsten

    This paper addresses the detection and reporting of abnormal building access with a wireless sensor network. A common office room, offering space for two working persons, has been monitored with ten sensor nodes and a base station. The task of the system is to report suspicious office occupation, such as searching of the office by thieves, while normal office occupation should not raise alarms. In order to save communication energy, the system provides all nodes with some adaptive short-term memory, so that a set of sensor activation patterns can be temporarily learned. The local memory is implemented as an Adaptive Resonance Theory (ART) neural network. Unknown event patterns detected at the sensor node level are reported to the base station, where system-wide anomaly detection is performed. The anomaly detector is lightweight and completely self-learning. The system can be run autonomously, or it could be used as a triggering system to turn on an additional high-resolution system on demand. Our building monitoring system has proven to work reliably in different evaluated scenarios. Communication costs of up to 90% could be saved compared to a threshold-based approach without local memory.
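
    A heavily simplified sketch of the vigilance test at the heart of ART-style pattern learning (our illustration, not the paper's network): an activation pattern is matched against stored prototypes, and anything below the vigilance threshold is stored and reported as a novel event.

      #include <stdio.h>

      #define DIM 4          /* activation pattern length (illustrative) */
      #define MAXPROTO 8
      #define VIGILANCE 0.75

      static double proto[MAXPROTO][DIM];
      static int nproto = 0;

      /* Fraction of the input's activation that a prototype also has:
         a simplified ART-1-style match score. */
      static double match(const double *p, const double *x) {
          double both = 0.0, active = 0.0;
          for (int i = 0; i < DIM; i++) {
              active += x[i];
              both += (p[i] < x[i] ? p[i] : x[i]);
          }
          return active > 0.0 ? both / active : 1.0;
      }

      /* Returns 1 if the pattern resonates with a stored prototype;
         otherwise learns it as a new prototype and returns 0 (novel). */
      static int observe(const double *x) {
          for (int i = 0; i < nproto; i++)
              if (match(proto[i], x) >= VIGILANCE)
                  return 1;
          if (nproto < MAXPROTO) {
              for (int i = 0; i < DIM; i++)
                  proto[nproto][i] = x[i];
              nproto++;
          }
          return 0;
      }

      int main(void) {
          double normal[DIM]   = {1, 1, 0, 0};
          double intruder[DIM] = {0, 1, 1, 1};
          observe(normal);  /* learned as a known pattern */
          printf("normal again -> known: %d\n", observe(normal));
          printf("intruder     -> known: %d\n", observe(intruder));
          return 0;
      }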

  2. Best bang for your buck: GPU nodes for GROMACS biomolecular simulations

    PubMed Central

    Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut

    2015-01-01

    The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well‐exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)‐based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off‐loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance‐to‐price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer‐class GPUs this improvement equally reflects in the performance‐to‐price ratio. Although memory issues in consumer‐class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost‐efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26238484

  3. Best bang for your buck: GPU nodes for GROMACS biomolecular simulations.

    PubMed

    Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L; Grubmüller, Helmut

    2015-10-05

    The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs this improvement equally reflects in the performance-to-price ratio. Although memory issues in consumer-class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.

  4. panMetaDocs and DataSync - providing a convenient way to share and publish research data

    NASA Astrophysics Data System (ADS)

    Ulbricht, D.; Klump, J. F.

    2013-12-01

    In recent years, research institutions, geological surveys, and funding organizations have started to build infrastructures to facilitate the re-use of research data from previous work. At present, several intermeshed activities are coordinated to make data systems of the Earth sciences interoperable and recorded data discoverable. Driven by governmental authorities, ISO19115/19139 emerged as metadata standards for discovery of data and services. Established metadata transport protocols like OAI-PMH and OGC-CSW are used to disseminate metadata to data portals. With persistent identifiers like DOI and IGSN, research data and corresponding physical samples can be given unambiguous names and thus become citable. In summary, these activities focus primarily on 'ready to give away' data, already stored in an institutional repository and described with appropriate metadata. Many datasets are not 'born' in this state but are produced in small and federated research projects. To make access and reuse of these 'small data' easier, these data should be centrally stored and version controlled from the very beginning of activities. We developed DataSync [1] as a supplemental application to the panMetaDocs [2] data exchange platform, as a data management tool for small science projects. DataSync is a Java application that runs on a local computer and synchronizes directory trees into an eSciDoc repository [3] by creating eSciDoc objects via eSciDoc's REST API. DataSync can be installed on multiple computers and is in this way able to synchronize the files of a research team over the internet. XML metadata can be added as separate files that are managed together with data files as versioned eSciDoc objects. A project-customized instance of panMetaDocs is provided to give a web-based overview of the previously uploaded file collection and to allow further annotation with metadata inside the eSciDoc repository. panMetaDocs is a PHP-based web application to assist the creation of metadata in any XML-based metadata schema. To reduce manual entry of metadata to a minimum and make use of contextual information in a project setting, metadata fields can be populated with static or dynamic content. Access rights can be defined to control visibility of and access to stored objects. Notifications about recently updated datasets are available by RSS and e-mail, and the entire inventory can be harvested via OAI-PMH. panMetaDocs is optimized to be harvested by panFMP [4]. panMetaDocs is able to mint dataset DOIs through DataCite and uses eSciDoc's REST API to transfer eSciDoc objects from a non-public 'pending' status to the published status 'released', which makes the data and metadata of the published object available worldwide through the internet. The application scenario presented here shows the adoption of open source applications for data sharing and publication. An eSciDoc repository is used as storage for data and metadata. DataSync serves as a file ingester and distributor, whereas panMetaDocs' main function is to annotate the dataset files with metadata to make them ready for publication and sharing with your own team or with the scientific community.

  5. Small mammal MRI imaging in spinal cord injury: a novel practical technique for using a 1.5 T MRI.

    PubMed

    Levene, Howard B; Mohamed, Feroze B; Faro, Scott H; Seshadri, Asha B; Loftus, Christopher M; Tuma, Ronald F; Jallo, Jack I

    2008-07-30

    The field of spinal cord injury research is an active one. The pathophysiology of SCI is not yet entirely revealed. As such, animal models are required for the exploration of new therapies and treatments. We present a novel technique using available hospital MRI machines to examine SCI in a mouse SCI model. The model is a 60 kdyne direct contusion injury to the mouse thoracic spine. No new electronic equipment is required. A 1.5 T MRI machine with a human wrist coil is employed, and a standard multisection 2D fast spin-echo (FSE) T2-weighted sequence is used for imaging the mouse. The contrast-to-noise ratio (CNR) between the injured and normal areas of the spinal cord showed a three-fold increase in contrast between these two regions. The MRI findings could be correlated with kinematic outcome scores of ambulation, such as BBB or BMS. The ability to follow an SCI in the same animal over time should improve the quality of data while reducing the number of animals required in SCI research. It is the aim of the authors to share this non-invasive technique and to make it available to the scientific research community.

  6. Reducing I/O variability using dynamic I/O path characterization in petascale storage systems

    DOE PAGES

    Son, Seung Woo; Sehrish, Saba; Liao, Wei-keng; ...

    2016-11-01

    In petascale systems with a million CPU cores, scalable and consistent I/O performance is becoming increasingly difficult to sustain, mainly because of I/O variability. This variability is caused by concurrently running processes/jobs competing for I/O, or by a RAID rebuild when a disk drive fails. We present a mechanism that stripes across a selected subset of I/O nodes with the lightest workload at runtime to achieve the highest I/O bandwidth available in the system. In this paper, we propose a probing mechanism to enable application-level dynamic file striping to mitigate I/O variability. We also implement the proposed mechanism in the high-level I/O library that enables memory-to-file data layout transformation and allows transparent file partitioning using subfiling. Subfiling is a technique that partitions data into a set of smaller files and manages access to them, allowing the data to be treated as a single, normal file by users. Here, we demonstrate that our bandwidth probing mechanism can successfully identify temporally slower I/O nodes without noticeable runtime overhead. Experimental results on NERSC's systems also show that our approach isolates I/O variability effectively on shared systems and improves overall collective I/O performance with less variation.
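
    A minimal sketch of the bookkeeping behind subfiling (hypothetical names, not the authors' library): a global offset in the logical file is mapped to a subfile index and a local offset, so the partitioned data can still be addressed as a single file.

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical subfiling map: a logically contiguous file is split
         into fixed-size subfiles; each global offset lands in exactly one. */
      #define SUBFILE_SIZE (64 * 1024 * 1024ULL)  /* 64 MiB per subfile */

      typedef struct { uint64_t subfile; uint64_t local_offset; } SubfileLoc;

      static SubfileLoc map_offset(uint64_t global_offset) {
          SubfileLoc loc;
          loc.subfile = global_offset / SUBFILE_SIZE;
          loc.local_offset = global_offset % SUBFILE_SIZE;
          return loc;
      }

      int main(void) {
          uint64_t off = 200 * 1024 * 1024ULL;  /* byte 200 MiB of the file */
          SubfileLoc loc = map_offset(off);
          printf("global %llu -> subfile %llu, offset %llu\n",
                 (unsigned long long)off, (unsigned long long)loc.subfile,
                 (unsigned long long)loc.local_offset);
          return 0;
      }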

  7. Shared memories reveal shared structure in neural activity across individuals

    PubMed Central

    Chen, J.; Leong, Y.C.; Honey, C.J.; Yong, C.H.; Norman, K.A.; Hasson, U.

    2016-01-01

    Our lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? Participants viewed a fifty-minute movie, then verbally described the events during functional MRI, producing unguided detailed descriptions lasting up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated in default-network, medial-temporal, and high-level visual areas. Individual event patterns were both highly discriminable from one another and similar between people, suggesting consistent spatial organization. In many high-order areas, patterns were more similar between people recalling the same event than between recall and perception, indicating systematic reshaping of percept into memory. These results reveal the existence of a common spatial organization for memories in high-level cortical areas, where encoded information is largely abstracted beyond sensory constraints; and that neural patterns during perception are altered systematically across people into shared memory representations for real-life events. PMID:27918531

  8. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], but at the cost of compromising ease of use. Distributed memory models, such as message passing or one-sided communication, offer performance and scalability but compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data, and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems and, by recognizing the communication overhead for remote data transfer, promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model, the capabilities of the toolkit, and discusses its evolution.
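
    A toy sketch of the programming model described (hypothetical names, not the Global Arrays API): data lives in a partitioned global array, and the programmer explicitly moves sections between the global space and local buffers before and after computing.

      #include <stdio.h>
      #include <string.h>

      /* Toy global address space: a distributed array simulated as one
         block partitioned between two "processes" (hypothetical names,
         not the GA API). */
      #define GLOBAL_LEN 8
      static double global_array[GLOBAL_LEN];  /* halves owned by procs 0 and 1 */

      /* Explicit transfers between the global space and local storage. */
      static void ga_get(int lo, int hi, double *local) {
          memcpy(local, global_array + lo, (size_t)(hi - lo) * sizeof(double));
      }
      static void ga_put(int lo, int hi, const double *local) {
          memcpy(global_array + lo, local, (size_t)(hi - lo) * sizeof(double));
      }

      int main(void) {
          double local[4];
          ga_get(4, 8, local);     /* fetch a remotely owned section */
          for (int i = 0; i < 4; i++)
              local[i] += 1.0;     /* compute on the local copy */
          ga_put(4, 8, local);     /* write the section back */
          printf("global_array[4] = %.1f\n", global_array[4]);
          return 0;
      }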

  9. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared- and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep a large number of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared-memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  10. High order parallel numerical schemes for solving incompressible flows

    NASA Technical Reports Server (NTRS)

    Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.

    1992-01-01

    The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube-like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady-state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.

  11. Mnemonic transmission, social contagion, and emergence of collective memory: Influence of emotional valence, group structure, and information distribution.

    PubMed

    Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna

    2017-09-01

    Social transmission of memory and its consequence on collective memory have generated enduring interdisciplinary interest because of their widespread significance in interpersonal, sociocultural, and political arenas. We tested the influence of 3 key factors-emotional salience of information, group structure, and information distribution-on mnemonic transmission, social contagion, and collective memory. Participants individually studied emotionally salient (negative or positive) and nonemotional (neutral) picture-word pairs that were completely shared, partially shared, or unshared within participant triads, and then completed 3 consecutive recalls in 1 of 3 conditions: individual-individual-individual (control), collaborative-collaborative (identical group; insular structure)-individual, and collaborative-collaborative (reconfigured group; diverse structure)-individual. Collaboration enhanced negative memories especially in insular group structure and especially for shared information, and promoted collective forgetting of positive memories. Diverse group structure reduced this negativity effect. Unequally distributed information led to social contagion that creates false memories; diverse structure propagated a greater variety of false memories whereas insular structure promoted confidence in false recognition and false collective memory. A simultaneous assessment of network structure, information distribution, and emotional valence breaks new ground to specify how network structure shapes the spread of negative memories and false memories, and the emergence of collective memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Effects of cacheing on multitasking efficiency and programming strategy on an ELXSI 6400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.; Benner, R.E.

    1985-12-01

    The impact of a cache/shared memory architecture, and, in particular, the cache coherency problem, upon concurrent algorithm and program development is discussed. In this context, a simple set of programming strategies is proposed which streamlines code development and improves code performance when multitasking in a cache/shared memory or distributed memory environment.

  13. Implementation of a parallel unstructured Euler solver on shared and distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.; Das, Raja; Saltz, Joel; Vermeland, R. E.

    1992-01-01

    An efficient three dimensional unstructured Euler solver is parallelized on a Cray Y-MP C90 shared memory computer and on an Intel Touchstone Delta distributed memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between two differing architectures are made.

  14. Common Data Elements for Spinal Cord Injury Clinical Research: A National Institute for Neurological Disorders and Stroke Project

    PubMed Central

    Biering-Sørensen, Fin; Alai, Sherita; Anderson, Kim; Charlifue, Susan; Chen, Yuying; DeVivo, Michael; Flanders, Adam E.; Jones, Linda; Kleitman, Naomi; Lans, Aria; Noonan, Vanessa K.; Odenkirchen, Joanne; Steeves, John; Tansey, Keith; Widerström-Noga, Eva; Jakeman, Lyn B.

    2015-01-01

    Objective: To develop a comprehensive set of common data elements (CDEs), data definitions, case report forms and guidelines for use in spinal cord injury (SCI) clinical research, as part of the CDE project at the National Institute of Neurological Disorders and Stroke (NINDS) of the USA National Institutes of Health. Setting: International working groups. Methods: Nine working groups composed of international experts reviewed existing CDEs and instruments, created new elements when needed, and provided recommendations for SCI clinical research. The project was carried out in collaboration with and cross-referenced to the development of the International Spinal Cord Society (ISCoS) International SCI Data Sets. The recommendations were compiled, subjected to internal review, and posted online for external public comment. The final version was reviewed by all working groups and the NINDS CDE team prior to release. Results: The NINDS SCI CDEs and supporting documents are publicly available on the NINDS CDE website and the ISCoS website. The CDEs span the continuum of SCI care and the full range of domains of the International Classification of Functioning, Disability and Health. Conclusions: Widespread use of common data elements can facilitate SCI clinical research and trial design, data sharing, and retrospective analyses. Continued international collaboration will enable consistent data collection and reporting, and will help ensure that the data elements are updated, reviewed and broadcast as additional evidence is obtained. PMID:25665542

  15. Exploring collective memory and place attachment using social media data

    NASA Astrophysics Data System (ADS)

    Felasari, Sushardjanti; Setyabudi, Herybert; Budiyanto Setyohadi, Djoko; Dewi, Sinta

    2017-12-01

    This paper describes how collective memory and the level of place attachment can be explored using social media data to develop sustainable travel destinations in the city of Yogyakarta. Yogyakarta is famous as a tourist destination in Indonesia. One of the reasons why people visit an object at a travel destination is to recall the memory of the place. Memory is important for creating memorable spaces and places, as it differentiates one place from another. Memorable places can grow into a symbol and an identity of a district in the city. This paper explores the collective memories recorded as statuses in social media. The study identifies the distribution of locations, or nodes, representing the memory footprint of the city of Yogyakarta, achieved by determining the quality level of nodes based on the level of place attachment. Analysis is done by looking at the number of captioned statuses by location and time. Qualitative description is used to present the level of place attachment based on the content of the captioned statuses. The study shows that the level of place attachment seems not to be influenced by the popularity of an object; however, it affects how sustainable a travel destination might be in future development.

  16. Field results from a new die-to-database reticle inspection platform

    NASA Astrophysics Data System (ADS)

    Broadbent, William; Yokoyama, Ichiro; Yu, Paul; Seki, Kazunori; Nomura, Ryohei; Schmalfuss, Heiko; Heumann, Jan; Sier, Jean-Paul

    2007-05-01

    A new die-to-database high-resolution reticle defect inspection platform, TeraScanHR, has been developed for advanced production use at the 45nm logic node, and is extendable to development use at the 32nm node (and the comparable memory nodes). These nodes will use predominantly ArF immersion lithography, although EUV may also be used. According to recent surveys, the predominant reticle types for the 45nm node are 6% simple tri-tone and COG. Other advanced reticle types may also be used for these nodes, including dark field alternating, Mask Enhancer, complex tri-tone, high transmission, CPL, etc. Finally, aggressive model-based OPC will typically be used, which will include many small structures such as jogs, serifs, and SRAF (sub-resolution assist features), with accompanying very small gaps between adjacent structures. The current generation of inspection systems is inadequate to meet these requirements. The architecture and performance of the new TeraScanHR reticle inspection platform are described. This new platform is designed to inspect the aforementioned reticle types in die-to-database and die-to-die modes using both transmitted and reflected illumination. Recent results from field testing at two of the three beta sites are shown (Toppan Printing in Japan and the Advanced Mask Technology Center in Germany). The results include applicable programmed defect test reticles and advanced 45nm product reticles (also comparable memory reticles), and show that high sensitivity and low false detection rates are achieved. The platform can also be configured for the current 65nm, 90nm, and 130nm nodes.

  17. Node Redeployment Algorithm Based on Stratified Connected Tree for Underwater Sensor Networks

    PubMed Central

    Liu, Jun; Jiang, Peng; Wu, Feng; Yu, Shanen; Song, Chunyue

    2016-01-01

    During underwater sensor network (UWSN) operation, node drift with the water environment causes network topology changes. Periodic examination and adjustment of node locations are needed to maintain good network monitoring quality for as long as possible. In this paper, a node redeployment algorithm based on a stratified connected tree for UWSNs is proposed. At every network adjustment moment, self-examination and adjustment of node locations are performed first. If a node is outside the monitored space, it returns along a straight line to the last location recorded in its memory. Later, the network topology is stratified into a connected tree that takes the sink node as the root node by broadcasting ready information level by level, which can improve the network connectivity rate. Finally, synthetically considering the network coverage and connectivity rates and the node movement distance, the sink node performs centralized optimization on the locations of leaf nodes in the stratified connected tree. Simulation results show that the proposed redeployment algorithm can not only keep the number of nodes in the monitored space as high as possible and maintain good network coverage and connectivity rates during network operation, but also reduce node movement distance during redeployment and prolong the network lifetime. PMID:28029124
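
    A minimal sketch of the stratification step (our illustration, with a hard-coded topology): starting from the sink, a breadth-first broadcast assigns each reachable node a level and a parent, yielding a connected tree rooted at the sink.

      #include <stdio.h>

      #define N 6  /* number of nodes; node 0 is the sink (illustrative) */

      /* adjacency matrix: 1 if two nodes are within communication range */
      static const int adj[N][N] = {
          {0,1,1,0,0,0},
          {1,0,0,1,0,0},
          {1,0,0,1,1,0},
          {0,1,1,0,0,1},
          {0,0,1,0,0,1},
          {0,0,0,1,1,0},
      };

      int main(void) {
          int level[N], parent[N], queue[N], head = 0, tail = 0;
          for (int i = 0; i < N; i++) { level[i] = -1; parent[i] = -1; }

          level[0] = 0;          /* the sink is the root of the tree */
          queue[tail++] = 0;
          while (head < tail) {  /* broadcast "ready" level by level */
              int u = queue[head++];
              for (int v = 0; v < N; v++)
                  if (adj[u][v] && level[v] < 0) {
                      level[v] = level[u] + 1;
                      parent[v] = u;  /* tree edge toward the sink */
                      queue[tail++] = v;
                  }
          }

          for (int i = 0; i < N; i++)
              printf("node %d: level %d, parent %d\n", i, level[i], parent[i]);
          return 0;
      }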

  18. Dynamics of T cell, antigen-presenting cell, and pathogen interactions during recall responses in the lymph node.

    PubMed

    Chtanova, Tatyana; Han, Seong-Ji; Schaeffer, Marie; van Dooren, Giel G; Herzmark, Paul; Striepen, Boris; Robey, Ellen A

    2009-08-21

    Memory T cells circulate through lymph nodes where they are poised to respond rapidly upon re-exposure to a pathogen; however, the dynamics of memory T cell, antigen-presenting cell, and pathogen interactions during recall responses are largely unknown. We used a mouse model of infection with the intracellular protozoan parasite, Toxoplasma gondii, in conjunction with two-photon microscopy, to address this question. After challenge, memory T cells migrated more rapidly than naive T cells, relocalized toward the subcapsular sinus (SCS) near invaded macrophages, and engaged in prolonged interactions with infected cells. Parasite invasion of T cells occurred by direct transfer of the parasite from the target cell into the T cell and corresponded to an antigen-specific increase in the rate of T cell invasion. Our results provide insight into cellular interactions during recall responses and suggest a mechanism of pathogen subversion of the immune response.

  19. Proceedings of a Workshop on Cognitive Testing Methodology (11-12 June 1984)

    DTIC Science & Technology

    1986-01-01

    Becker, and G.E. Handlemann. 1979. Hippocampus, space, and memory. Behav. Brain Sci. 2:313-365. Osbourne, D.P., E.R. Brown, and C.T. Randt. 1982...the B complex. J. Appl. Psychol. 30:359-379. Chernorutskii, M.V. 1943. Problem of alimentary dystrophy. Stud. Leningrad Physicians 3:3-13. Cited in A

  20. NCI and the Republic of Peru Sign Statement of Intent

    Cancer.gov

    The U.S. National Cancer Institute and the Republic of Peru signed a statement of intent to share an interest in fostering collaborative biomedical research in oncology and a common goal in educating and training the next generation of cancer research scientists.

  1. Sleep Benefits Memory for Semantic Category Structure While Preserving Exemplar-Specific Information.

    PubMed

    Schapiro, Anna C; McDevitt, Elizabeth A; Chen, Lang; Norman, Kenneth A; Mednick, Sara C; Rogers, Timothy T

    2017-11-01

    Semantic memory encompasses knowledge about both the properties that typify concepts (e.g. robins, like all birds, have wings) as well as the properties that individuate conceptually related items (e.g. robins, in particular, have red breasts). We investigate the impact of sleep on new semantic learning using a property inference task in which both kinds of information are initially acquired equally well. Participants learned about three categories of novel objects possessing some properties that were shared among category exemplars and others that were unique to an exemplar, with exposure frequency varying across categories. In Experiment 1, memory for shared properties improved and memory for unique properties was preserved across a night of sleep, while memory for both feature types declined over a day awake. In Experiment 2, memory for shared properties improved across a nap, but only for the lower-frequency category, suggesting a prioritization of weakly learned information early in a sleep period. The increase was significantly correlated with amount of REM, but was also observed in participants who did not enter REM, suggesting involvement of both REM and NREM sleep. The results provide the first evidence that sleep improves memory for the shared structure of object categories, while simultaneously preserving object-unique information.

  2. PIYAS-proceeding to intelligent service oriented memory allocation for flash based data centric sensor devices in wireless sensor networks.

    PubMed

    Rizvi, Sanam Shahla; Chung, Tae-Sun

    2010-01-01

    Flash memory has become a widespread storage medium for modern wireless devices because of its effective characteristics like non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system is highly required with consideration of sensor node constraints. In this paper, we propose a novel log-structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory-mapping information very small, and to provide high query-response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any previous scheme. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
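
    The log-structured write path at the heart of such schemes can be sketched as follows; this is a generic illustration (simulated NAND array, illustrative sizes), not the actual PIYAS layout:

        /* Generic log-structured flash write path: data is only appended,
         * and a small in-SRAM table maps logical blocks to their latest
         * physical page, so mounting needs the table, not a media scan. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define PAGES   64
        #define PAGE_SZ 16
        #define BLOCKS  8

        static uint8_t flash[PAGES][PAGE_SZ];  /* simulated NAND array        */
        static int     map[BLOCKS];            /* logical block -> phys page  */
        static int     next_free = 0;          /* head of the append-only log */

        /* Always program the next free page, then repoint the mapping entry;
         * the previously mapped page becomes stale for later GC. */
        int log_write(int lblock, const uint8_t *buf) {
            if (next_free == PAGES) return -1;  /* full: GC would run here */
            memcpy(flash[next_free], buf, PAGE_SZ);
            map[lblock] = next_free++;
            return 0;
        }

        int main(void) {
            memset(map, -1, sizeof map);
            uint8_t sample[PAGE_SZ] = "sensor-sample";
            log_write(3, sample);
            log_write(3, sample);               /* update: remap, no erase */
            printf("block 3 -> page %d (page 0 now stale)\n", map[3]);
            return 0;
        }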

  3. DMA engine for repeating communication patterns

    DOEpatents

    Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard; Vranas, Pavlos

    2010-09-21

    A parallel computer system is constructed as a network of interconnected compute nodes to operate a global message-passing application for performing communications across the network. Each of the compute nodes includes one or more individual processors with memories which run local instances of the global message-passing application operating at each compute node to carry out local processing operations independent of processing operations carried out at other compute nodes. Each compute node also includes a DMA engine constructed to interact with the application via Injection FIFO Metadata describing multiple Injection FIFOs, where each Injection FIFO may contain an arbitrary number of message descriptors, in order to process messages with a fixed processing overhead irrespective of the number of message descriptors included in the Injection FIFO.
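
    A rough sketch of what such Injection FIFO metadata might look like is given below; all structure and field names are hypothetical, chosen only to illustrate how a ring of message descriptors lets the engine's per-activation overhead stay fixed regardless of queue depth:

        /* Hypothetical sketch of injection-FIFO metadata: the engine chases
         * head toward tail through a descriptor ring, so software may queue
         * any number of messages at fixed per-activation cost. */
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {            /* one message to inject into the network */
            uint64_t payload_addr;  /* local memory address of the data       */
            uint32_t length;        /* bytes to send                          */
            uint32_t dest_node;     /* destination compute node               */
        } msg_descriptor_t;

        typedef struct {            /* injection FIFO metadata, one per FIFO  */
            msg_descriptor_t *base; /* start of the descriptor ring           */
            uint32_t size;          /* ring capacity in descriptors           */
            uint32_t head;          /* next descriptor the engine will read   */
            uint32_t tail;          /* next free slot software will fill      */
        } injection_fifo_t;

        /* Engine-side drain loop: cost does not depend on queue depth. */
        void dma_drain(injection_fifo_t *f,
                       void (*send)(const msg_descriptor_t *)) {
            while (f->head != f->tail) {
                send(&f->base[f->head]);
                f->head = (f->head + 1) % f->size;
            }
        }

        static void fake_send(const msg_descriptor_t *d) {
            printf("send %u bytes to node %u\n", d->length, d->dest_node);
        }

        int main(void) {
            msg_descriptor_t ring[4];
            injection_fifo_t fifo = { ring, 4, 0, 0 };
            ring[fifo.tail] = (msg_descriptor_t){ 0x1000, 256, 7 };
            fifo.tail = (fifo.tail + 1) % fifo.size;
            dma_drain(&fifo, fake_send);
            return 0;
        }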

  4. Wide memory window in graphene oxide charge storage nodes

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Pu, Jing; Chan, Daniel S. H.; Cho, Byung Jin; Loh, Kian Ping

    2010-04-01

    Solution-processable, isolated graphene oxide (GO) monolayers have been used as the charge trapping dielectric in a TaN gate/Al2O3/isolated GO sheets/SiO2/p-Si (TANOS-type) memory device. The TANOS-type structure serves as a memory device whose threshold voltage is controlled by the amount of charge trapped in the GO sheet. Capacitance-voltage hysteresis curves reveal a 7.5 V memory window using a sweep voltage of -5 to 14 V. Thermal reduction of the GO to graphene reduces the memory window to 1.4 V. The unique charge trapping properties of GO point to potential applications in flexible organic memory devices.

  5. Documentation of preventive care for pressure ulcers initiated during annual evaluations in SCI

    PubMed Central

    2016-01-01

    Objective Community-acquired pressure ulcers (PrUs) are a frequent cause of hospitalization of Veterans with spinal cord injury (SCI). The Veterans Health Administration (VHA) recommends that SCI annual evaluations include assessment of PrU risk factors, a thorough skin inspection, and sharing of recommendations for PrU prevention strategies. We characterized the consistency of preventive skin care during annual evaluations for Veterans with SCI as a first step in identifying strategies to more actively promote PrU prevention care in other healthcare encounters. Design/setting/participants Retrospective cross-sectional observational design, including review of electronic medical records for 206 Veterans with SCI admitted to 2 VA SCI centers from January-December, 2011. Outcome measures Proportion of applicable skin health elements documented (number of skin health elements documented/number of applicable elements). Results Our sample was primarily white (78%) and male (96.1%), with a mean age of 61 years. 40% of participants were hospitalized for PrU treatment, with a mean of 294 days (median = 345 days) from annual evaluation to the index admission. Veterans received an average of 75.5% (IQR 68-86%) of applicable skin health elements. Documentation of applicable skin health elements was significantly higher during inpatient vs. outpatient annual evaluations (mean elements received = 80.3% and 64.3%, respectively, P < 0.001). No significant differences were observed in documentation of skin health elements for Veterans at high vs. low PrU risk. Conclusion Additional PrU preventive care in the VHA outpatient setting may increase identification and detection of PrU risk factors and early PrU damage for Veterans with SCI in the community, allowing for earlier intervention. PMID:26763668

  6. Active LifestyLe Rehabilitation interventions in aging spinal cord injury (ALLRISC): a multicentre research program.

    PubMed

    van der Woude, L H V; de Groot, S; Postema, K; Bussmann, J B J; Janssen, T W J; Post, M W M

    2013-06-01

    With today's specialized medical care, the life expectancy of persons with a spinal cord injury (SCI) has improved considerably. With increasing age and time since injury, however, many individuals with SCI show a seriously inactive lifestyle, associated with deconditioning and secondary health conditions (SHCs) (e.g. pressure sores, urinary and respiratory tract infections, osteoporosis, upper-extremity pain, obesity, diabetes, cardiovascular disease) and resulting in reduced participation and quality of life (QoL). Avoiding this downward spiral is crucial. The aims are to understand possible deconditioning and SHCs in persons aging with a SCI in the context of active lifestyle, fitness, participation and QoL, and to examine interventions that enhance active lifestyle, fitness, participation and QoL and help prevent some of the SHCs. A multicentre multidisciplinary research program (Active LifestyLe Rehabilitation Interventions in aging Spinal Cord injury, ALLRISC) was set up within the long-standing Dutch SCI-rehabilitation clinical research network. ALLRISC is a four-study research program addressing inactive lifestyle, deconditioning, and SHCs and their associations in people aging with SCI. The program consists of a cross-sectional study (n = 300) and three randomized clinical trials. All studies share a focus on fitness, active lifestyle, SHCs and deconditioning, and outcome measures on these and other (participation, QoL) domains. It is hypothesized that a self-management program, low-intensity wheelchair exercise, and hybrid functional electrical stimulation-supported leg and handcycling are effective interventions to enhance active lifestyle and fitness, help to prevent some of the important SHCs in chronic SCI, and improve participation and QoL. ALLRISC aims to provide evidence-based preventive components of a rehabilitation aftercare system that preserves functioning in aging persons with SCI.

  7. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.
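
    The style of code such a tool emits can be illustrated with a small example; this is a hand-written sketch of an OpenMP-annotated loop nest, not actual CAPTools output:

        /* Illustration (not actual CAPTools output): the kind of OpenMP
         * directive a parallelization tool inserts around a loop nest whose
         * iterations are data-independent. */
        #include <stdio.h>
        #include <omp.h>

        #define N 1000

        int main(void) {
            static double a[N][N], b[N][N];
            #pragma omp parallel for collapse(2)     /* inserted directive */
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    a[i][j] = 0.25 * (b[i][j] + 3.0);
            printf("threads available: %d\n", omp_get_max_threads());
            return 0;
        }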

  8. Using Nested Contractions and a Hierarchical Tensor Format To Compute Vibrational Spectra of Molecules with Seven Atoms.

    PubMed

    Thomas, Phillip S; Carrington, Tucker

    2015-12-31

    We propose a method for solving the vibrational Schrödinger equation with which one can compute hundreds of energy levels of seven-atom molecules using at most a few gigabytes of memory. It uses nested contractions in conjunction with the reduced-rank block power method (RRBPM) described in J. Chem. Phys. 2014, 140, 174111. Successive basis contractions are organized into a tree, the nodes of which are associated with eigenfunctions of reduced-dimension Hamiltonians. The RRBPM is used recursively to compute eigenfunctions of nodes in bases of products of reduced-dimension eigenfunctions of nodes with fewer coordinates. The corresponding vectors are tensors in what is called CP-format. The final wave functions are therefore represented in a hierarchical CP-format. Computational efficiency and accuracy are significantly improved by representing the Hamiltonian in the same hierarchical format as the wave function. We demonstrate that with this hierarchical RRBPM it is possible to compute energy levels of a 64-D coupled-oscillator model Hamiltonian and also of acetonitrile (CH3CN) and ethylene oxide (C2H4O), for which we use quartic potentials. The most accurate acetonitrile calculation uses 139 MB of memory and takes 3.2 h on a single processor. The most accurate ethylene oxide calculation uses 6.1 GB of memory and takes 14 d on 63 processors. The hierarchical RRBPM shatters the memory barrier that impedes the calculation of vibrational spectra.
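
    As background, the kernel that block power methods generalize is ordinary power iteration; a minimal single-vector sketch for a small symmetric matrix follows (illustrative only; the paper's method operates on tensors in CP format):

        /* Minimal power iteration for a small symmetric matrix: repeated
         * multiply-and-normalize converges to the dominant eigenvector. */
        #include <stdio.h>
        #include <math.h>

        #define N 3

        int main(void) {
            double H[N][N] = {{2,1,0},{1,3,1},{0,1,2}};
            double v[N] = {1,0,0}, w[N];
            for (int it = 0; it < 200; it++) {
                double norm = 0.0;
                for (int i = 0; i < N; i++) {          /* w = H v */
                    w[i] = 0.0;
                    for (int j = 0; j < N; j++) w[i] += H[i][j] * v[j];
                    norm += w[i] * w[i];
                }
                norm = sqrt(norm);
                for (int i = 0; i < N; i++) v[i] = w[i] / norm;
            }
            double lambda = 0.0;                       /* Rayleigh quotient */
            for (int i = 0; i < N; i++) {
                double Hv = 0.0;
                for (int j = 0; j < N; j++) Hv += H[i][j] * v[j];
                lambda += v[i] * Hv;
            }
            printf("dominant eigenvalue ~ %.6f\n", lambda);  /* 4.0 here */
            return 0;
        }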

  9. Sam2bam: High-Performance Framework for NGS Data Preprocessing Tools

    PubMed Central

    Cheng, Yinhe; Tzeng, Tzy-Hwa Kathy

    2016-01-01

    This paper introduces a high-throughput software tool framework called sam2bam that enables users to significantly speed up pre-processing for next-generation sequencing data. sam2bam is especially efficient on single-node, multi-core, large-memory systems. It can reduce the runtime of data pre-processing in marking duplicate reads on a single-node system by 156-186x compared with de facto standard tools. sam2bam consists of parallel software components that can fully utilize multiple processors, available memory, high-bandwidth storage, and hardware compression accelerators, if available. sam2bam provides file format conversion between well-known genome file formats, from SAM to BAM, as a basic feature. Additional features such as analyzing, filtering, and converting input data are provided by plug-in tools, e.g., duplicate marking, which can be attached to sam2bam at runtime. We demonstrated that sam2bam could significantly reduce the runtime of next-generation sequencing (NGS) data pre-processing from about two hours to about one minute for a whole-exome data set on a 16-core single-node system using up to 130 GB of memory. sam2bam could reduce the runtime of NGS data pre-processing from about 20 hours to about nine minutes for a whole-genome sequencing data set on the same system using up to 711 GB of memory. PMID:27861637

  10. Neural correlates of confidence during item recognition and source memory retrieval: evidence for both dual-process and strength memory theories.

    PubMed

    Hayes, Scott M; Buchler, Norbou; Stokes, Jared; Kragel, James; Cabeza, Roberto

    2011-12-01

    Although the medial-temporal lobes (MTL), PFC, and parietal cortex are considered primary nodes in the episodic memory network, there is much debate regarding the contributions of MTL, PFC, and parietal subregions to recollection versus familiarity (dual-process theory) and the feasibility of accounts on the basis of a single memory strength process (strength theory). To investigate these issues, the current fMRI study measured activity during retrieval of memories that differed quantitatively in terms of strength (high vs. low-confidence trials) and qualitatively in terms of recollection versus familiarity (source vs. item memory tasks). Support for each theory varied depending on which node of the episodic memory network was considered. Results from MTL best fit a dual-process account, as a dissociation was found between a right hippocampal region showing high-confidence activity during the source memory task and bilateral rhinal regions showing high-confidence activity during the item memory task. Within PFC, several left-lateralized regions showed greater activity for source than item memory, consistent with recollective orienting, whereas a right-lateralized ventrolateral area showed low-confidence activity in both tasks, consistent with monitoring processes. Parietal findings were generally consistent with strength theory, with dorsal areas showing low-confidence activity and ventral areas showing high-confidence activity in both tasks. This dissociation fits with an attentional account of parietal functions during episodic retrieval. The results suggest that both dual-process and strength theories are partly correct, highlighting the need for an integrated model that links to more general cognitive theories to account for observed neural activity during episodic memory retrieval.

  11. Neural Correlates of Confidence during Item Recognition and Source Memory Retrieval: Evidence for Both Dual-process and Strength Memory Theories

    PubMed Central

    Hayes, Scott M.; Buchler, Norbou; Stokes, Jared; Kragel, James; Cabeza, Roberto

    2012-01-01

    Although the medial-temporal lobes (MTL), PFC, and parietal cortex are considered primary nodes in the episodic memory network, there is much debate regarding the contributions of MTL, PFC, and parietal subregions to recollection versus familiarity (dual-process theory) and the feasibility of accounts on the basis of a single memory strength process (strength theory). To investigate these issues, the current fMRI study measured activity during retrieval of memories that differed quantitatively in terms of strength (high vs. low-confidence trials) and qualitatively in terms of recollection versus familiarity (source vs. item memory tasks). Support for each theory varied depending on which node of the episodic memory network was considered. Results from MTL best fit a dual-process account, as a dissociation was found between a right hippocampal region showing high-confidence activity during the source memory task and bilateral rhinal regions showing high-confidence activity during the item memory task. Within PFC, several left-lateralized regions showed greater activity for source than item memory, consistent with recollective orienting, whereas a right-lateralized ventrolateral area showed low-confidence activity in both tasks, consistent with monitoring processes. Parietal findings were generally consistent with strength theory, with dorsal areas showing low-confidence activity and ventral areas showing high-confidence activity in both tasks. This dissociation fits with an attentional account of parietal functions during episodic retrieval. The results suggest that both dual-process and strength theories are partly correct, highlighting the need for an integrated model that links to more general cognitive theories to account for observed neural activity during episodic memory retrieval. PMID:21736454

  12. Interaction of ApoE3 and ApoE4 isoforms with an ITM2b/BRI2 mutation linked to the Alzheimer disease-like Danish dementia: Effects on learning and memory

    PubMed Central

    Biundo, Fabrizio; Ishiwari, Keita; Del Prete, Dolores; D’Adamio, Luciano

    2015-01-01

    Mutations in Amyloid β Precursor Protein (APP) and in genes that regulate APP processing – such as PSEN1/2 and ITM2b/BRI2 – cause familial dementias such as Familial Alzheimer disease (FAD) and the Familial Danish (FDD) and British (FBD) dementias. The ApoE gene is the major genetic risk factor for sporadic AD. Three major variants of ApoE exist in humans (ApoE2, ApoE3, and ApoE4), with the ApoE4 allele being strongly associated with AD. ITM2b/BRI2 is also a candidate regulatory node gene predicted to mediate the common patterns of gene expression shared by healthy ApoE4 carriers and late-onset AD patients not carrying ApoE4. This evidence provides a direct link between ITM2b/BRI2 and ApoE4. To test whether ApoE4 and pathogenic ITM2b/BRI2 interact to modulate learning and memory, we crossed a mouse carrying the ITM2b/BRI2 mutation that causes FDD knocked into the endogenous mouse Itm2b/Bri2 gene (FDDKI mice) with human ApoE3 and ApoE4 targeted replacement mice. The resultant ApoE3, FDDKI/ApoE3, ApoE4, and FDDKI/ApoE4 male mice were assessed longitudinally for learning and memory at 4, 6, 12, and 16-17 months of age. The results showed that ApoE4-carrying mice displayed spatial working/short-term memory deficits relative to ApoE3-carrying mice starting in early middle age, while long-term spatial memory of ApoE4 mice was not adversely affected even at 16-17 months, and that the FDD mutation impaired working/short-term spatial memory in ApoE3-carrying mice and produced impaired long-term spatial memory in ApoE4-carrying mice in middle age. The present results suggest that the FDD mutation may differentially affect learning and memory in ApoE4 carriers and non-carriers. PMID:26528887

  13. Interaction of ApoE3 and ApoE4 isoforms with an ITM2b/BRI2 mutation linked to the Alzheimer disease-like Danish dementia: Effects on learning and memory.

    PubMed

    Biundo, Fabrizio; Ishiwari, Keita; Del Prete, Dolores; D'Adamio, Luciano

    2015-12-01

    Mutations in Amyloid β Precursor Protein (APP) and in genes that regulate APP processing--such as PSEN1/2 and ITM2b/BRI2--cause familial dementias such as Familial Alzheimer disease (FAD) and the Familial Danish (FDD) and British (FBD) dementias. The ApoE gene is the major genetic risk factor for sporadic AD. Three major variants of ApoE exist in humans (ApoE2, ApoE3, and ApoE4), with the ApoE4 allele being strongly associated with AD. ITM2b/BRI2 is also a candidate regulatory node gene predicted to mediate the common patterns of gene expression shared by healthy ApoE4 carriers and late-onset AD patients not carrying ApoE4. This evidence provides a direct link between ITM2b/BRI2 and ApoE4. To test whether ApoE4 and pathogenic ITM2b/BRI2 interact to modulate learning and memory, we crossed a mouse carrying the ITM2b/BRI2 mutation that causes FDD knocked into the endogenous mouse Itm2b/Bri2 gene (FDDKI mice) with human ApoE3 and ApoE4 targeted replacement mice. The resultant ApoE3, FDDKI/ApoE3, ApoE4, and FDDKI/ApoE4 male mice were assessed longitudinally for learning and memory at 4, 6, 12, and 16-17 months of age. The results showed that ApoE4-carrying mice displayed spatial working/short-term memory deficits relative to ApoE3-carrying mice starting in early middle age, while long-term spatial memory of ApoE4 mice was not adversely affected even at 16-17 months, and that the FDD mutation impaired working/short-term spatial memory in ApoE3-carrying mice and produced impaired long-term spatial memory in ApoE4-carrying mice in middle age. The present results suggest that the FDD mutation may differentially affect learning and memory in ApoE4 carriers and non-carriers.

  14. "I am just thankful": the experience of gratitude following traumatic spinal cord injury.

    PubMed

    Chun, Sanghee; Lee, Youngkhill

    2013-01-01

    This study explored the experience of gratitude in everyday life following traumatic spinal cord injury (SCI) by applying thematic analysis to personal experience narratives. Fifteen participants including two negative cases with SCI shared individual experiences of gratitude according to five themes: (a) everyday life, (b) family support, (c) new opportunities, (d) positive sense of self, and (e) gratitude to God. The findings demonstrated that participants benefited from their efforts to appraise challenging life experiences as positive. Therapists could apply intentional and guided gratitude interventions so that individuals would practice and embrace gratitude in adjusting, coping, and adapting positively to various life changes following trauma.

  15. Efficiently sphere-decodable physical layer transmission schemes for wireless storage networks

    NASA Astrophysics Data System (ADS)

    Lu, Hsiao-Feng Francis; Barreal, Amaro; Karpuk, David; Hollanti, Camilla

    2016-12-01

    Three transmission schemes over a new type of multiple-access channel (MAC) model with inter-source communication links are proposed and investigated in this paper. This new channel model is well motivated by, e.g., wireless distributed storage networks, where communication to repair a lost node takes place from helper nodes to a repairing node over a wireless channel. Since in many wireless networks nodes can come and go in an arbitrary manner, there must be an inherent capability of inter-node communication between every pair of nodes. Assuming that communication is possible between every pair of helper nodes, the newly proposed schemes are based on various smart time-sharing and relaying strategies. In other words, certain helper nodes will be regarded as relays, thereby converting the conventional uncooperative multiple-access channel to a multiple-access relay channel (MARC). The diversity-multiplexing gain tradeoff (DMT) of the system together with efficient sphere-decodability and low structural complexity in terms of the number of antennas required at each end is used as the main design objectives. While the optimal DMT for the new channel model is fully open, it is shown that the proposed schemes outperform the DMT of the simple time-sharing protocol and, in some cases, even the optimal uncooperative MAC DMT. While using a wireless distributed storage network as a motivating example throughout the paper, the MAC transmission techniques proposed here are completely general and as such applicable to any MAC communication with inter-source communication links.

  16. Structurally Integrated Versus Structurally Segregated Memory Representations: Implications for the Design of Instructional Materials.

    ERIC Educational Resources Information Center

    Hayes-Roth, Barbara

    Two kinds of memory organization are distinguished: segregrated versus integrated. In segregated memory organizations, related learned propositions have separate memory representations. In integrated memory organizations, memory representations of related propositions share common subrepresentations. Segregated memory organizations facilitate…

  17. Getting connected: Both associative and semantic links structure semantic memory for newly learned persons.

    PubMed

    Wiese, Holger; Schweinberger, Stefan R

    2015-01-01

    The present study examined whether semantic memory for newly learned people is structured by visual co-occurrence, shared semantics, or both. Participants were trained with pairs of simultaneously presented (i.e., co-occurring) preexperimentally unfamiliar faces, which either did or did not share additionally provided semantic information (occupation, place of living, etc.). Semantic information could also be shared between faces that did not co-occur. A subsequent priming experiment revealed faster responses for both co-occurrence/no shared semantics and no co-occurrence/shared semantics conditions, than for an unrelated condition. Strikingly, priming was strongest in the co-occurrence/shared semantics condition, suggesting additive effects of these factors. Additional analysis of event-related brain potentials yielded priming in the N400 component only for combined effects of visual co-occurrence and shared semantics, with more positive amplitudes in this than in the unrelated condition. Overall, these findings suggest that both semantic relatedness and visual co-occurrence are important when novel information is integrated into person-related semantic memory.

  18. Memory T and memory B cells share a transcriptional program of self-renewal with long-term hematopoietic stem cells

    PubMed Central

    Luckey, Chance John; Bhattacharya, Deepta; Goldrath, Ananda W.; Weissman, Irving L.; Benoist, Christophe; Mathis, Diane

    2006-01-01

    The only cells of the hematopoietic system that undergo self-renewal for the lifetime of the organism are long-term hematopoietic stem cells and memory T and B cells. To determine whether there is a shared transcriptional program among these self-renewing populations, we first compared the gene-expression profiles of naïve, effector and memory CD8+ T cells with those of long-term hematopoietic stem cells, short-term hematopoietic stem cells, and lineage-committed progenitors. Transcripts augmented in memory CD8+ T cells relative to naïve and effector T cells were selectively enriched in long-term hematopoietic stem cells and were progressively lost in their short-term and lineage-committed counterparts. Furthermore, transcripts selectively decreased in memory CD8+ T cells were selectively down-regulated in long-term hematopoietic stem cells and progressively increased with differentiation. To confirm that this pattern was a general property of immunologic memory, we turned to independently generated gene expression profiles of memory, naïve, germinal center, and plasma B cells. Once again, memory-enriched and -depleted transcripts were also appropriately augmented and diminished in long-term hematopoietic stem cells, and their expression correlated with progressive loss of self-renewal function. Thus, there appears to be a common signature of both up- and down-regulated transcripts shared between memory T cells, memory B cells, and long-term hematopoietic stem cells. This signature was not consistently enriched in neural or embryonic stem cell populations and, therefore, appears to be restricted to the hematopoietic system. These observations provide evidence that the shared phenotype of self-renewal in the hematopoietic system is linked at the molecular level. PMID:16492737

  19. Categorical and associative relations increase false memory relative to purely associative relations.

    PubMed

    Coane, Jennifer H; McBride, Dawn M; Termonen, Miia-Liisa; Cutting, J Cooper

    2016-01-01

    The goal of the present study was to examine the contributions of associative strength and similarity in terms of shared features to the production of false memories in the Deese/Roediger-McDermott list-learning paradigm. Whereas the activation/monitoring account suggests that false memories are driven by automatic associative activation from list items to nonpresented lures, combined with errors in source monitoring, other accounts (e.g., fuzzy trace theory, global-matching models) emphasize the importance of semantic-level similarity, and thus predict that shared features between list and lure items will increase false memory. Participants studied lists of nine items related to a nonpresented lure. Half of the lists consisted of items that were associated but did not share features with the lure, and the other half included items that were equally associated but also shared features with the lure (in many cases, these were taxonomically related items). The two types of lists were carefully matched in terms of a variety of lexical and semantic factors, and the same lures were used across list types. In two experiments, false recognition of the critical lures was greater following the study of lists that shared features with the critical lure, suggesting that similarity at a categorical or taxonomic level contributes to false memory above and beyond associative strength. We refer to this phenomenon as a "feature boost" that reflects additive effects of shared meaning and association strength and is generally consistent with accounts of false memory that have emphasized thematic or feature-level similarity among studied and nonstudied representations.

  20. A simplified memory network model based on pattern formations

    NASA Astrophysics Data System (ADS)

    Xu, Kesheng; Zhang, Xiyun; Wang, Chaoqing; Liu, Zonghua

    2014-12-01

    Many experiments have demonstrated the transition on different time scales from short-term memory (STM) to long-term memory (LTM) in mammalian brains, while its theoretical understanding is still under debate. Toward an understanding of the underlying mechanism, it has recently been shown that long-period rhythmic synchronous firing can arise in a scale-free network, provided both high-degree hubs and loops formed by low-degree nodes exist. We here present a simplified memory network model to show that self-sustained synchronous firing can be observed even without these two necessary conditions. This simplified network consists of two loops of coupled excitable neurons with different synaptic conductances, with one node acting as the sensory neuron that receives an external stimulus signal. The model can further show how a diversity of firing patterns can be selectively formed by varying the signal frequency, the stimulus duration, and the network topology, corresponding to STM and LTM patterns with different time scales. A theoretical analysis is presented to explain the underlying mechanism of the firing patterns.
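
    The excitable-loop ingredient of such models can be sketched with a toy cellular simulation; the circulating pulse below (rest, excited, refractory states) is illustrative only, and the parameters are not those of the paper:

        /* Toy loop of excitable units sustaining a circulating pulse.
         * States: 0 = rest, 1..REFRAC = refractory countdown, EXC = fired.
         * Loop length and refractory time set the rhythm's period. */
        #include <stdio.h>

        #define N      8                 /* neurons in the loop          */
        #define REFRAC 2                 /* refractory steps after firing */
        #define EXC    (REFRAC + 1)      /* state code for "excited"     */

        int main(void) {
            int s[N] = {0}, ns[N];
            s[0] = EXC;                          /* external stimulus     */
            for (int t = 0; t < 20; t++) {
                for (int i = 0; i < N; i++) {
                    if (s[i] == EXC)     ns[i] = REFRAC;      /* fired    */
                    else if (s[i] > 0)   ns[i] = s[i] - 1;    /* recover  */
                    else                 /* rest: fire if upstream fired  */
                        ns[i] = (s[(i + N - 1) % N] == EXC) ? EXC : 0;
                }
                for (int i = 0; i < N; i++) {
                    s[i] = ns[i];
                    putchar(s[i] == EXC ? '*' : '.');
                }
                putchar('\n');                   /* one row per time step */
            }
            return 0;
        }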

  1. System and method for programmable bank selection for banked memory subsystems

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Hoenicke, Dirk; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan

    2010-09-07

    A programmable memory system and method for enabling one or more processor devices to access shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and a second logic device responsive to each respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment to access memory storage distributed across the one or more memory storage structures.
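
    A software analogue of the described selection logic is sketched below; the register layout and names are hypothetical, serving only to illustrate how pre-determined bit values at selected physical address bit locations yield a bank select:

        /* Hypothetical sketch of programmable bank selection: each rule
         * plays the role of a per-processor "first logic device" comparing
         * selected physical address bits against programmed values; the
         * matching rule's index is the bank select. */
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            uint64_t mask;    /* which physical address bits to examine     */
            uint64_t match;   /* pre-determined values those bits must have */
        } bank_rule_t;

        /* Returns the selected bank, or -1 if no rule matches. */
        int select_bank(uint64_t paddr, const bank_rule_t *rules, int nbanks) {
            for (int b = 0; b < nbanks; b++)
                if ((paddr & rules[b].mask) == rules[b].match)
                    return b;
            return -1;
        }

        int main(void) {
            /* Example: bit 28 of the physical address picks one of two banks. */
            bank_rule_t rules[2] = {
                { 1ull << 28, 0ull       },   /* bank 0: bit 28 clear */
                { 1ull << 28, 1ull << 28 },   /* bank 1: bit 28 set   */
            };
            printf("0x%x -> bank %d\n", 0x10000000u,
                   select_bank(0x10000000u, rules, 2));
            return 0;
        }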

  2. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented and illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times lower than that of data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM SP1, etc. is presented.
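
    The explicit shared memory style described here survives today as SHMEM/OpenSHMEM; a minimal one-sided put, written against the modern OpenSHMEM API rather than the original CRI library, might look as follows (assuming an OpenSHMEM implementation is available):

        /* One-sided put in the explicit shared-memory style: each PE writes
         * directly into a symmetric variable on its neighbor, no matching
         * receive required (OpenSHMEM API, not the original CRI calls). */
        #include <stdio.h>
        #include <shmem.h>

        int main(void) {
            static long mailbox = -1;    /* symmetric: exists on every PE */
            shmem_init();
            int me = shmem_my_pe(), npes = shmem_n_pes();

            long msg = me;               /* deposit my rank at my neighbor */
            shmem_long_put(&mailbox, &msg, 1, (me + 1) % npes);
            shmem_barrier_all();         /* make all puts visible          */

            printf("PE %d received %ld\n", me, mailbox);
            shmem_finalize();
            return 0;
        }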

  3. Climate impact on spreading of airborne infectious diseases. Complex network based modeling of climate influences on influenza like illnesses

    NASA Astrophysics Data System (ADS)

    Brenner, Frank; Marwan, Norbert; Hoffmann, Peter

    2017-06-01

    In this study we combined a wide range of data sets to simulate the outbreak of an airborne infectious disease that is transmitted directly from human to human. The basis is a complex network whose structure is inspired by global air traffic data (from openflights.org) containing information about airports, airport locations, direct flight connections, and airplane types. Disease spreading inside every node is realized with a Susceptible-Exposed-Infected-Recovered (SEIR) compartmental model. Disease transmission rates in our model depend on the climate environment and therefore vary in time and from node to node. To implement the correlation between water vapor pressure and influenza transmission rate [J. Shaman, M. Kohn, Proc. Natl. Acad. Sci. 106, 3243 (2009)], we use globally available climate reanalysis data (WATCH-Forcing-Data-ERA-Interim, WFDEI). During our sensitivity analysis we found that disease spreading dynamics depend strongly on network properties, the climatic environment of the epidemic outbreak location, and the season of the year in which the outbreak happens.
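
    The per-node dynamics can be sketched with a forward-Euler SEIR integration in which the transmission rate varies in time; the seasonal modulation below is a placeholder, not the humidity-dependent rate used in the study:

        /* Forward-Euler SEIR integration for one node, with a time-varying
         * transmission rate beta(t) standing in for the climate-dependent
         * rate (placeholder functional form and parameters). */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            double S = 0.999, E = 0.0, I = 0.001, R = 0.0;
            const double sigma = 1.0 / 2.0;  /* incubation rate (1/days) */
            const double gam   = 1.0 / 5.0;  /* recovery rate   (1/days) */
            const double pi    = 3.141592653589793;
            const double dt    = 0.1;        /* time step (days)         */

            for (double t = 0.0; t < 120.0; t += dt) {
                /* placeholder seasonal modulation of transmissibility */
                double beta = 0.6 + 0.2 * cos(2.0 * pi * t / 365.0);
                double toE = beta * S * I;   /* new exposures             */
                double toI = sigma * E;      /* exposed become infectious */
                double toR = gam * I;        /* infectious recover        */
                S += dt * (-toE);
                E += dt * (toE - toI);
                I += dt * (toI - toR);
                R += dt * toR;
            }
            printf("after 120 days: S=%.3f E=%.3f I=%.3f R=%.3f\n", S, E, I, R);
            return 0;
        }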

  4. Design and study of geosciences data share platform :platform framework, data interoperability, share approach

    NASA Astrophysics Data System (ADS)

    Lu, H.; Yi, D.

    2010-12-01

    Deep exploration is one of the important approaches to geoscience research. We started this work in the 1980s and have since accumulated a large volume of data. Researchers usually integrate data from both space exploration and deep exploration to study geological structures and represent the Earth's subsurface, and they analyze and interpret on the basis of the integrated data. Because the different exploration approaches produce heterogeneous data, data access has long been a confusing issue for researchers, and the problem of data sharing and interoperability has to be solved during the development of the SinoProbe research project. Through a survey of well-known domestic and overseas exploration projects and geoscience data platforms, this work explores solutions for data sharing and interoperability. Based on SOA, we present a deep-exploration data sharing framework comprising three levels: the data level handles data storage and the integration of heterogeneous data; the middle level provides data services for geophysics, geochemistry, etc. by means of Web services, and supports various application compositions through GIS middleware and the Eclipse RCP; the interaction level gives professional and non-professional users access to data of different accuracies. The framework adopts the GeoSciML data interchange approach. GeoSciML is a geoscience information markup language, an application of the OpenGIS Consortium's (OGC) Geography Markup Language (GML). It maps heterogeneous data into a single Earth reference frame and enables interoperation. In this article we discuss how to integrate heterogeneous data and share them within the SinoProbe project.

  5. Compression in wearable sensor nodes: impacts of node topology.

    PubMed

    Imtiaz, Syed Anas; Casson, Alexander J; Rodriguez-Villegas, Esther

    2014-04-01

    Wearable sensor nodes monitoring the human body must operate autonomously for very long periods of time. Online and low-power data compression embedded within the sensor node is therefore essential to minimize data storage/transmission overheads. This paper presents a low-power MSP430 compressive sensing implementation for providing such compression, focusing particularly on the impact of the sensor node architecture on the compression performance. Compression power performance is compared for four different sensor nodes incorporating different strategies for wireless transmission/on-sensor-node local storage of data. The results demonstrate that the compressive sensing used must be designed differently depending on the underlying node topology, and that the compression strategy should not be guided only by signal processing considerations. We also provide a practical overview of state-of-the-art sensor node topologies. Wireless transmission of data is often preferred as it offers increased flexibility during use, but in general at the cost of increased power consumption. We demonstrate that wireless sensor nodes can highly benefit from the use of compressive sensing and now can achieve power consumptions comparable to, or better than, the use of local memory.
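
    The on-node compression step of compressive sensing reduces a length-N block to M < N random projections, y = Phi*x; a floating-point sketch follows (a real MSP430 port would use fixed-point arithmetic, and the parameters here are illustrative):

        /* Compressive-sensing measurement on the sensor node: compress a
         * length-N block to M random projections. A seeded +/-1 Bernoulli
         * Phi lets the receiver regenerate the same matrix for recovery. */
        #include <stdio.h>
        #include <stdlib.h>

        #define N 64          /* samples per block        */
        #define M 16          /* measurements transmitted */

        int main(void) {
            double x[N], y[M];
            for (int i = 0; i < N; i++) x[i] = (i % 8) - 3.5; /* dummy data */

            srand(42);                 /* shared seed => reproducible Phi */
            for (int m = 0; m < M; m++) {
                y[m] = 0.0;
                for (int n = 0; n < N; n++) {
                    int phi = (rand() & 1) ? 1 : -1;  /* Bernoulli entry */
                    y[m] += phi * x[n];
                }
            }
            /* 4:1 reduction in what must be stored or transmitted */
            printf("sent %d of %d values, e.g. y[0]=%.1f\n", M, N, y[0]);
            return 0;
        }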

  6. Social Networks and High Healthcare Utilization: Building Resilience Through Analysis

    DTIC Science & Technology

    2016-09-01

    of Social Network Analysis Patients Developing targeted intervention programs based on the individual’s needs may potentially help improve the...network structure is found in the patterns of interconnection that develop between nodes. It is this linking through common nodes, “the AB link shares...transitivity is responsible for the clustering of nodes that form “communities” of people based on geography, common interests, or other group

  7. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    NASA Technical Reports Server (NTRS)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described and performance measurements for the single and multiprocessor implementation are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a friction Reynolds number Re_tau = 180 has shown good agreement with existing data.
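
    The proposed hybrid structure, MPI between nodes and OpenMP within a shared-memory node, can be sketched as follows (the loop body is a stand-in, not the ADI solver itself):

        /* Hybrid MPI/OpenMP skeleton: OpenMP threads share memory inside a
         * node; MPI reduces partial results across nodes of the cluster. */
        #include <stdio.h>
        #include <mpi.h>
        #include <omp.h>

        #define NLOC 1000000

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double local = 0.0;
            #pragma omp parallel for reduction(+:local)   /* intra-node */
            for (int i = 0; i < NLOC; i++)
                local += 1.0 / (1.0 + (double)i + rank);

            double global;                                /* inter-node */
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                       MPI_COMM_WORLD);
            if (rank == 0)
                printf("%d ranks x %d threads: sum=%f\n",
                       size, omp_get_max_threads(), global);
            MPI_Finalize();
            return 0;
        }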

  8. Address tracing for parallel machines

    NASA Technical Reports Server (NTRS)

    Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

    1991-01-01

    Recently implemented parallel-system address-tracing methods are surveyed and compared on several metrics. The issues specific to collecting traces for both shared-memory and distributed-memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered-microcode-based, and instrumented-program-based traces. The problems unique to shared-memory and distributed-memory multiprocessors are examined separately.

  9. Lattice QCD calculation using VPP500

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seyong; Ohta, Shigemi

    1995-02-01

    A new vector parallel supercomputer, Fujitsu VPP500, was installed at RIKEN earlier this year. It consists of 30 vector computers, each with 1.6 GFLOPS peak speed and 256 MB memory, connected by a crossbar switch with 400 MB/s peak data transfer rate each way between any pair of nodes. The authors developed a Fortran lattice QCD simulation code for it. It runs at about 1.1 GFLOPS sustained per node for Metropolis pure-gauge update, and about 0.8 GFLOPS sustained per node for conjugate gradient inversion of staggered fermion matrix.

  10. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds

    PubMed Central

    Howe, Piers D. L.

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources. PMID:28410383

  11. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds.

    PubMed

    Lapierre, Mark D; Cropper, Simon J; Howe, Piers D L

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources.

  12. Visual and spatial working memory are not that dissociated after all: a time-based resource-sharing account.

    PubMed

    Vergauwe, Evie; Barrouillet, Pierre; Camos, Valérie

    2009-07-01

    Examinations of interference between visual and spatial materials in working memory have suggested domain- and process-based fractionations of visuo-spatial working memory. The present study examined the role of central time-based resource sharing in visuo-spatial working memory and assessed its role in obtained interference patterns. Visual and spatial storage were combined with both visual and spatial on-line processing components in computer-paced working memory span tasks (Experiment 1) and in a selective interference paradigm (Experiment 2). The cognitive load of the processing components was manipulated to investigate its impact on concurrent maintenance for both within-domain and between-domain combinations of processing and storage components. In contrast to both domain- and process-based fractionations of visuo-spatial working memory, the results revealed that recall performance was determined by the cognitive load induced by the processing of items, rather than by the domain to which those items pertained. These findings are interpreted as evidence for a time-based resource-sharing mechanism in visuo-spatial working memory.

  13. Neem leaf glycoprotein promotes dual generation of central and effector memory CD8(+) T cells against sarcoma antigen vaccine to induce protective anti-tumor immunity.

    PubMed

    Ghosh, Sarbari; Sarkar, Madhurima; Ghosh, Tithi; Guha, Ipsita; Bhuniya, Avishek; Saha, Akata; Dasgupta, Shayani; Barik, Subhasis; Bose, Anamika; Baral, Rathindranath

    2016-03-01

    We have previously shown that Neem Leaf Glycoprotein (NLGP) mediates sustained tumor protection by activating the host immune response. Now we report that adjuvant help from NLGP predominantly generates CD44(+)CD62L(high)CCR7(high) central memory (TCM; in lymph node) and CD44(+)CD62L(low)CCR7(low) effector memory (TEM; in spleen) CD8(+) T cells in Swiss mice after vaccination with sarcoma antigen (SarAg). The generated TCM and TEM participated, respectively, in replenishing the memory cell pool for a sustained disease-free state and in rapid tumor eradication. TCM generated after SarAg+NLGP vaccination underwent significant proliferation and IL-2 secretion following SarAg re-stimulation. Furthermore, SarAg+NLGP vaccination helps in greater survival of the memory precursor effector cells at the peak of the effector response and in their maintenance as mature memory cells, in comparison to the single-modality treatment. This response is corroborated by reduced phosphorylation of FOXO in the cytosol and increased KLF2 in the nucleus, associated with enhanced CD62L and CCR7 expression of lymph node-resident CD8(+) T cells. However, spleen-resident CD8(+) T memory cells show superior efficacy for immediate memory-to-effector cell conversion. The data support in all respects the superiority of SarAg+NLGP over SarAg vaccination alone: the host benefits from rapid effector functions whenever required, whereas central memory cells are thought to replenish the memory cell pool for ultimate sustained disease-free survival up to 60 days following post-vaccination tumor inoculation.

  14. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Computational Research Division, Lawrence Berkeley National Laboratory; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Berkeley

    2009-05-04

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at the National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications.

  15. Affinity-aware checkpoint restart

    DOE PAGES

    Saini, Ajay; Rezaei, Arash; Mueller, Frank; ...

    2014-12-08

    Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness in the checkpoint/restart (C/R) mechanism: application tasks originally pinned to cores may be restarted on different cores, and in the case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. This work contributes a novel design technique for C/R mechanisms to preserve task-to-core mappings and NUMA-node-specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism enhanced with affinity awareness, demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD, with negligible overheads, compared with execution times up to nearly four times longer without affinity-aware restarts on 16 cores.
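
    The core-affinity half of the idea can be sketched with Linux's sched_setaffinity; restoring NUMA page placement would additionally require libnuma or move_pages(), omitted here, and the checkpoint field is hypothetical:

        /* Re-pin a restarted task to its originally recorded core (Linux).
         * The saved core id is assumed to come from the checkpoint image. */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sched.h>

        int restore_core_affinity(int saved_core) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(saved_core, &set);       /* core id from the checkpoint */
            return sched_setaffinity(0, sizeof set, &set);  /* 0 = self */
        }

        int main(void) {
            int saved_core = 2;              /* e.g., read from checkpoint  */
            if (restore_core_affinity(saved_core) != 0)
                perror("sched_setaffinity");
            printf("task pinned, now on CPU %d\n", sched_getcpu());
            return 0;
        }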

  16. A Node Linkage Approach for Sequential Pattern Mining

    PubMed Central

    Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel

    2014-01-01

    Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state of the art algorithms. PMID:24933123
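
    The pseudo-projection idea can be sketched as follows: a projected database is kept as (sequence id, offset) pairs pointing into the original data rather than as materialized copies; names and data are illustrative:

        /* Pseudo-projection: extending the current pattern by symbol c just
         * advances each (sequence, offset) pointer past the next c, so no
         * projected database is ever copied. */
        #include <stdio.h>
        #include <string.h>

        #define NSEQ 3

        const char *db[NSEQ] = { "abcab", "acbcb", "babcb" };

        typedef struct { int seq, off; } proj_t;  /* pointer into db, no copy */

        /* Returns the number of surviving entries (the pattern's support). */
        int project(proj_t *p, int n, char c) {
            int m = 0;
            for (int i = 0; i < n; i++) {
                const char *hit = strchr(db[p[i].seq] + p[i].off, c);
                if (hit) {
                    p[m].seq = p[i].seq;
                    p[m].off = (int)(hit - db[p[i].seq]) + 1;
                    m++;
                }
            }
            return m;
        }

        int main(void) {
            proj_t p[NSEQ] = {{0,0},{1,0},{2,0}};
            int n = project(p, NSEQ, 'a');      /* pattern <a>   */
            n = project(p, n, 'b');             /* pattern <a,b> */
            printf("support of <a,b> = %d\n", n);
            return 0;
        }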

  17. Affinity-aware checkpoint restart

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saini, Ajay; Rezaei, Arash; Mueller, Frank

    Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness in the checkpoint/restart (C/R) mechanism: application tasks originally pinned to cores may be restarted on different cores, and in the case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. This work contributes a novel design technique for C/R mechanisms to preserve task-to-core mappings and NUMA-node-specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism enhanced with affinity awareness, demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD, with negligible overheads, compared with execution times up to nearly four times longer without affinity-aware restarts on 16 cores.

  18. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) for all considered beam orientations (up to 100 GB for complex cases) have to be handled in main memory, and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts for accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is dominated not by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from main memory to the CPUs is non-uniform across the RAM: every CPU has a dedicated memory location (node) with minimum access time. We apply a thread-node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally across all nodes. Furthermore, we use a custom sorting scheme for the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases; larger cases take ~4 s. BAS runs including FOs for 1000 different beam ensembles take ~15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.
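
    The thread-node binding strategy can be sketched with libnuma on Linux (link with -lnuma); node counts, slice sizes, and names below are illustrative:

        /* Thread-node binding with libnuma: run on a given NUMA node and
         * allocate that node's slice of the dose-influence data locally,
         * so each CPU reads from its minimum-access-time memory. */
        #include <stdio.h>
        #include <numa.h>

        int main(void) {
            if (numa_available() < 0) { fprintf(stderr, "no NUMA\n"); return 1; }
            int nodes = numa_max_node() + 1;

            for (int node = 0; node < nodes; node++) {
                numa_run_on_node(node);        /* bind execution to the node */
                /* give it DID storage on the same node (1 MiB here) */
                double *did_slice = numa_alloc_onnode(1 << 20, node);
                if (did_slice) {
                    did_slice[0] = 1.0;        /* touch: pages fault in locally */
                    printf("node %d: slice allocated locally\n", node);
                    numa_free(did_slice, 1 << 20);
                }
            }
            return 0;
        }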

  19. Why are you telling me that? A conceptual model of the social function of autobiographical memory.

    PubMed

    Alea, Nicole; Bluck, Susan

    2003-03-01

    In an effort to stimulate and guide empirical work within a functional framework, this paper provides a conceptual model of the social functions of autobiographical memory (AM) across the lifespan. The model delineates the processes and variables involved when AMs are shared to serve social functions. Components of the model include: lifespan contextual influences, the qualitative characteristics of memory (emotionality and level of detail recalled), the speaker's characteristics (age, gender, and personality), the familiarity and similarity of the listener to the speaker, the level of responsiveness during the memory-sharing process, and the nature of the social relationship in which the memory sharing occurs (valence and length of the relationship). These components are shown to influence the type of social function served and/or the extent to which social functions are served. Directions for future empirical work to substantiate the model and hypotheses derived from the model are provided.

  20. Assessment and Evaluation of Primary Prevention in Spinal Cord Injury

    PubMed Central

    2013-01-01

    Although the incidence of spinal cord injury (SCI) is low, the consequences of this disabling condition are extremely significant for the individual, family, and the community. Sequelae occur in the physical, psychosocial, sexual, and financial arenas, making global prevention of SCI crucial. Understanding how to assess and evaluate primary prevention programs is an important competency for SCI professionals. Assessing a program’s success requires measuring processes, outcomes, and impact. Effective evaluation can lead future efforts for program design while ensuring accountability for the program itself. The intended impact of primary prevention programs for SCI is to decrease the number of individuals who sustain traumatic injury; many programs have process and outcome goals as well. An understanding of the basic types of evaluation, evaluation design, and the overall process of program evaluation is essential for ensuring that these programs are efficacious. All health care professionals have the opportunity to put prevention at the forefront of their practice. With the current paucity of available data, it is important that clinicians share their program design, their successes, and their failures so that all can benefit and future injury can be prevented. PMID:23678281

  1. A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers

    NASA Technical Reports Server (NTRS)

    Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)

    1997-01-01

    The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely used NASA multi-block CFD packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory accessing. By choosing an appropriate combination of the available partitioning and coalescing capabilities only during the execution stage, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent load-balanced executions using data coalescing and the two levels of parallelism. SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms on which the robustness of the implementation is tested. The performance behavior on the other computer platforms with a variety of realistic problems will be included as this on-going study progresses.
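
    The coarse-grain partitioning and coalescing step can be pictured as a bin-packing problem. The mpi4py sketch below (a simplified stand-in, not the PENS code; the block sizes are hypothetical) assigns multi-block grid blocks to MPI ranks greedily, coalescing small blocks onto lightly loaded ranks:

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Hypothetical cell counts for the blocks of a multi-block grid.
      block_sizes = [120_000, 80_000, 40_000, 40_000, 20_000, 20_000]

      # Greedy coalescing: next-largest block goes to the least-loaded rank.
      loads, owner = [0] * size, {}
      for b in sorted(range(len(block_sizes)), key=lambda i: -block_sizes[i]):
          r = loads.index(min(loads))
          owner[b] = r
          loads[r] += block_sizes[b]

      my_blocks = [b for b, r in owner.items() if r == rank]
      print(f"rank {rank} solves blocks {my_blocks}")

    Because every rank computes the same deterministic assignment, no communication is needed to agree on block ownership.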

  2. Destination memory impairment in older people.

    PubMed

    Gopie, Nigel; Craik, Fergus I M; Hasher, Lynn

    2010-12-01

    Older adults are assumed to have poor destination memory-knowing to whom they tell particular information-and anecdotes about them repeating stories to the same people are cited as informal evidence for this claim. Experiment 1 assessed young and older adults' destination memory by having participants tell facts (e.g., "A dime has 118 ridges around its edge") to pictures of famous people (e.g., Oprah Winfrey). Surprise recognition memory tests, which also assessed confidence, revealed that older adults, compared to young adults, were disproportionately impaired on destination memory relative to spared memory for the individual components (i.e., facts, faces) of the episode. Older adults also were more confident that they had not told a fact to a particular person when they actually had (i.e., a miss); this presumably causes them to repeat information more often than young adults. When the direction of information transfer was reversed in Experiment 2, such that the famous people shared information with the participants (i.e., a source memory experiment), age-related memory differences disappeared. In contrast to the destination memory experiment, older adults in the source memory experiment were more confident than young adults that someone had shared a fact with them when a different person actually had shared the fact (i.e., a false alarm). Overall, accuracy and confidence jointly influence age-related changes to destination memory, a fundamental component of successful communication. (c) 2010 APA, all rights reserved.

  3. Distributed generation of shared RSA keys in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Liang; Huang, Qin; Shen, Ying

    2005-12-01

    Mobile ad hoc networks are a new paradigm in which mobile nodes communicate over wireless links in a self-organized manner, independent of any fixed physical infrastructure or centralized administrative infrastructure. However, this very nature makes ad hoc networks highly vulnerable to security threats. The generation and distribution of shared keys for a CA (Certification Authority) is challenging for security solutions based on a distributed PKI (Public-Key Infrastructure)/CA. This paper discusses the solutions proposed in the literature and some related issues, and proposes a scheme for the distributed generation of shared threshold RSA keys for the CA. During the process of creating an RSA private key share, every CA node holds only its own private share. Distributed arithmetic is used to create the CA's private share locally, eliminating the need for a centralized management institution. By taking full account of the self-organizing character of mobile ad hoc networks, the scheme avoids the hidden security risk of any single node holding the CA's complete private key, enhancing the security and robustness of the system.
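
    The end state of such a scheme can be illustrated with a deliberately insecure toy (this is not the paper's generation protocol, which never materializes the full key anywhere): the private exponent d exists only as additive shares d = d1 + ... + dn mod phi(N), so producing a signature requires the cooperation of the share holders:

      import random

      def additive_shares(d, phi, n):
          """Split d into n random shares summing to d modulo phi."""
          shares = [random.randrange(phi) for _ in range(n - 1)]
          shares.append((d - sum(shares)) % phi)
          return shares

      # Tiny textbook RSA numbers (insecure): N = 61*53, e = 17, d = 2753.
      N, phi, d = 3233, 3120, 2753
      shares = additive_shares(d, phi, 3)

      m = 42                                 # message, gcd(m, N) = 1
      sig = 1
      for s in shares:                       # each node contributes m^s mod N
          sig = (sig * pow(m, s, N)) % N
      assert sig == pow(m, d, N)             # equals the ordinary signature

    The partial signatures combine correctly because the share sum differs from d only by a multiple of phi(N), which leaves m^d mod N unchanged for m coprime to N.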

  4. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    PubMed Central

    Zhu, Hao; Sun, Yan; Rajagopal, Gunaretnam; Mondry, Adrian; Dhar, Pawan

    2004-01-01

    Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods described. PMID:15339335
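
    A minimal flavor of a "quantitative" cellular automaton (a toy diffusion-style rule, not the paper's electrophysiological model) is a grid of continuous cell states updated synchronously from their neighbors:

      import numpy as np

      def step(v):
          """One synchronous update: each cell relaxes toward its 4-neighbor mean."""
          nbr = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
          return 0.5 * v + 0.5 * nbr

      v = np.zeros((64, 64))
      v[32, 32] = 1.0                  # point stimulus
      for _ in range(10):
          v = step(v)                  # excitation spreads across the grid

    Because every cell's update depends only on the previous time step, rows of the grid can be updated concurrently, which is what makes the OpenMP/MPI hybrid parallelization natural.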

  5. Mucosal immunization in macaques upregulates the innate APOBEC 3G anti-viral factor in CD4(+) memory T cells.

    PubMed

    Wang, Yufei; Bergmeier, Lesley A; Stebbings, Richard; Seidl, Thomas; Whittall, Trevor; Singh, Mahavir; Berry, Neil; Almond, Neil; Lehner, Thomas

    2009-02-05

    APOBEC3G is an innate intracellular anti-viral factor which deaminates retroviral cytidine to uridine. In vivo studies of APOBEC3G (A3G) were carried out in rhesus macaques, following mucosal immunization with SIV antigens and CCR5 peptides, linked to the 70kDa heat shock protein. A progressive increase in A3G mRNA was elicited in PBMC after each immunization (p<0.0002 to p≤0.02), which was maintained for at least 17 weeks. Analysis of memory T cells showed a significant increase in A3G mRNA and protein in CD4(+)CCR5(+) memory T cells in circulating (p=0.0001), splenic (p=0.0001), iliac lymph nodes (p=0.002) and rectal (p=0.01) cells of the immunized compared with unimmunized macaques. Mucosal challenge with SIVmac 251 showed a significant increase in A3G mRNA in the CD4(+)CCR5(+) circulating cells (p<0.01) and the draining iliac lymph node cells (p<0.05) in the immunized uninfected macaques, consistent with a protective effect exerted by A3G. The results suggest that mucosal immunization in a non-human primate can induce features of a memory response to an innate anti-viral factor in CCR5(+)CD4(+) memory and CD4(+)CD95(+)CCR7(-) effector memory T cells.

  6. Memory T cells and vaccines.

    PubMed

    Esser, Mark T; Marchese, Rocio D; Kierstead, Lisa S; Tussey, Lynda G; Wang, Fubao; Chirmule, Narendra; Washabaugh, Michael W

    2003-01-17

    T lymphocytes play a central role in the generation of a protective immune response in many microbial infections. After immunization, dendritic cells take up microbial antigens and traffic to draining lymph nodes where they present processed antigens to naïve T cells. These naïve T cells are stimulated to proliferate and differentiate into effector and memory T cells. Activated, effector and memory T cells provide B cell help in the lymph nodes and traffic to sites of infection where they secrete anti-microbial cytokines and kill infected cells. At least two types of memory cells have been defined in humans based on their functional and migratory properties. T central-memory (T(CM)) cells are found predominantly in lymphoid organs and cannot be immediately activated, whereas T effector-memory (T(EM)) cells are found predominantly in peripheral tissue and sites of inflammation and exhibit rapid effector function. Most currently licensed vaccines induce antibody responses capable of mediating long-term protection against lytic viruses such as influenza and smallpox. In contrast, vaccines against chronic pathogens that require cell-mediated immune responses to control, such as malaria, Mycobacterium tuberculosis (TB), human immunodeficiency virus (HIV) and hepatitis C virus (HCV), are currently not available or are ineffective. Understanding the mechanisms by which long-lived cellular immune responses are generated following vaccination should facilitate the development of safe and effective vaccines against these emerging diseases. Here, we review the current literature with respect to memory T cells and their implications for vaccine development.

  7. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
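
    The replicated reconstruction object can be paraphrased in a short thread-level sketch (the names and the toy update are assumptions, not Trace's API): each thread accumulates into a private replica of the reconstruction and the replicas are reduced at the end, so no fine-grained locking is needed on the hot path:

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      N_THREADS, IMG = 4, (128, 128)
      replicas = [np.zeros(IMG) for _ in range(N_THREADS)]

      def backproject(tid, rows):
          for r in rows:                            # stand-in for sinogram rows
              replicas[tid][r % IMG[0], :] += 1.0   # toy update, lock-free

      with ThreadPoolExecutor(N_THREADS) as ex:
          chunks = [range(t, 512, N_THREADS) for t in range(N_THREADS)]
          list(ex.map(backproject, range(N_THREADS), chunks))

      image = np.sum(replicas, axis=0)              # reduction merges replicas

    The cost of the scheme is the memory for the replicas plus the final reduction, which is exactly what the paper's optimizations target.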

  8. Accelerating Smith-Waterman Algorithm for Biological Database Search on CUDA-Compatible GPUs

    NASA Astrophysics Data System (ADS)

    Munekawa, Yuma; Ino, Fumihiko; Hagihara, Kenichi

    This paper presents a fast method capable of accelerating the Smith-Waterman algorithm for biological database search on a cluster of graphics processing units (GPUs). Our method is implemented using compute unified device architecture (CUDA), which is available on the nVIDIA GPU. As compared with previous methods, our method has four major contributions. (1) The method efficiently uses on-chip shared memory to reduce the data amount being transferred between off-chip video memory and processing elements in the GPU. (2) It also reduces the number of data fetches by applying a data reuse technique to query and database sequences. (3) A pipelined method is also implemented to overlap GPU execution with database access. (4) Finally, a master/worker paradigm is employed to accelerate hundreds of database searches on a cluster system. In experiments, the peak performance on a GeForce GTX 280 card reaches 8.32 giga cell updates per second (GCUPS). We also find that our method reduces the amount of data fetches to 1/140, achieving approximately three times higher performance than a previous CUDA-based method. Our 32-node cluster version is approximately 28 times faster than a single GPU version. Furthermore, the effective performance reaches 75.6 giga instructions per second (GIPS) using 32 GeForce 8800 GTX cards.
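
    For reference, the recurrence being accelerated is the standard Smith-Waterman dynamic program; a plain (unaccelerated) Python version with a linear gap penalty looks like this:

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
          """Return the best local-alignment score of strings a and b."""
          H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          best = 0
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  H[i][j] = max(0, H[i - 1][j - 1] + s,
                                H[i - 1][j] + gap, H[i][j - 1] + gap)
                  best = max(best, H[i][j])
          return best

      print(smith_waterman("ACACACTA", "AGCACACA"))

    GPU implementations typically exploit the independence of cells along each anti-diagonal of H; the shared-memory staging of query and database stripes described in the abstract additionally cuts the off-chip traffic that would otherwise dominate.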

  9. Fast distributed large-pixel-count hologram computation using a GPU cluster.

    PubMed

    Pan, Yuechao; Xu, Xuewu; Liang, Xinan

    2013-09-10

    Large-pixel-count holograms are an essential component of large-size holographic three-dimensional (3D) display, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques such as shared memory on the GPU, GPU-level adaptive load balancing, and node-level load distribution. Using these speed improvement techniques on the GPU cluster, we have achieved a 71.4 times computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from a 186M-pixel hologram computed in 8.82 s with all the above three speed improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increase the computation speed of large-pixel-count holograms for large-size holographic display.

  10. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  11. An Efficient Scheme of Quantum Wireless Multi-hop Communication using Coefficient Matrix

    NASA Astrophysics Data System (ADS)

    Zhao, Bei; Zha, Xin-Wei; Duan, Ya-Jun; Sun, Xin-Mei

    2015-08-01

    By defining a coefficient matrix, a new quantum teleportation scheme for quantum wireless multi-hop networks is proposed. With the help of intermediate nodes, an unknown qubit state can be teleported between two distant nodes that do not share entanglement in advance. Arbitrary Bell pairs and entanglement swapping are utilized to establish quantum channels among intermediate nodes. Using the collapsed matrix, the initial quantum state can be perfectly recovered at the destination.

  12. Direct memory access transfer completion notification

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Parker, Jeffrey J [Rochester, MN

    2011-02-15

    Methods, systems, and products are disclosed for DMA transfer completion notification that include: inserting, by an origin DMA on an origin node in an origin injection FIFO, a data descriptor for an application message; inserting, by the origin DMA, a reflection descriptor in the origin injection FIFO, the reflection descriptor specifying a remote get operation for injecting a completion notification descriptor in a reflection injection FIFO on a reflection node; transferring, by the origin DMA to a target node, the message in dependence upon the data descriptor; in response to completing the message transfer, transferring, by the origin DMA to the reflection node, the completion notification descriptor in dependence upon the reflection descriptor; receiving, by the origin DMA from the reflection node, a completion packet; and notifying, by the origin DMA in response to receiving the completion packet, the origin node's processing core that the message transfer is complete.

  13. Organic memory device with self-assembly monolayered aptamer conjugated nanoparticles

    NASA Astrophysics Data System (ADS)

    Oh, Sewook; Kim, Minkeun; Kim, Yejin; Jung, Hunsang; Yoon, Tae-Sik; Choi, Young-Jin; Jung Kang, Chi; Moon, Myeong-Ju; Jeong, Yong-Yeon; Park, In-Kyu; Ho Lee, Hyun

    2013-08-01

    An organic memory structure using a monolayer of aptamer-conjugated gold nanoparticles (Au NPs) as charge storage nodes was demonstrated. A metal-pentacene-insulator-semiconductor device was adopted for the non-volatile memory effect through a self-assembled monolayer of A10-aptamer-conjugated Au NPs, which was formed on an insulator surface functionalized with prostate-specific membrane antigen protein. The capacitance versus voltage (C-V) curves obtained for the monolayered Au NP capacitor exhibited a substantial flat-band voltage shift (ΔVFB), or memory window, of 3.76 V under (+/-)7 V voltage sweep. The memory device format can potentially be extended to a highly specific capacitive sensor for aptamer-specific biomolecule detection.

  14. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2011-01-11

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
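
    The window discipline is simple to model in software. This toy sketch (pure Python, not the patented mechanism) makes the point that puts are legal only while a window is open, and that opening and closing are the natural hooks at which coherence actions run:

      import threading

      class PutGetWindow:
          def __init__(self):
              self._open = threading.Event()
              self.buf = {}

          def open(self):
              self._open.set()      # hook: invalidate stale cached lines here

          def close(self):
              self._open.clear()    # hook: flush written lines here

          def put(self, key, value):
              assert self._open.is_set(), "puts allowed only while window is open"
              self.buf[key] = value

      w = PutGetWindow()
      w.open()
      w.put("x", 1)
      w.close()

    Concentrating coherence work at the open and close points is what lets the message-passing kernel avoid per-access coherence traffic in between.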

  15. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-02-21

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  16. Interference due to shared features between action plans is influenced by working memory span.

    PubMed

    Fournier, Lisa R; Behmer, Lawrence P; Stubblefield, Alexandra M

    2014-12-01

    In this study, we examined the interactions between the action plans that we hold in memory and the actions that we carry out, asking whether the interference due to shared features between action plans is due to selection demands imposed on working memory. Individuals with low and high working memory spans learned arbitrary motor actions in response to two different visual events (A and B), presented in a serial order. They planned a response to the first event (A) and while maintaining this action plan in memory they then executed a speeded response to the second event (B). Afterward, they executed the action plan for the first event (A) maintained in memory. Speeded responses to the second event (B) were delayed when it shared an action feature (feature overlap) with the first event (A), relative to when it did not (no feature overlap). The size of the feature-overlap delay was greater for low-span than for high-span participants. This indicates that interference due to overlapping action plans is greater when fewer working memory resources are available, suggesting that this interference is due to selection demands imposed on working memory. Thus, working memory plays an important role in managing current and upcoming action plans, at least for newly learned tasks. Also, managing multiple action plans is compromised in individuals who have low versus high working memory spans.

  17. Destination Memory Impairment in Older People

    PubMed Central

    Gopie, Nigel; Craik, Fergus I. M.; Hasher, Lynn

    2012-01-01

    Older adults are assumed to have poor destination memory— knowing to whom they tell particular information—and anecdotes about them repeating stories to the same people are cited as informal evidence for this claim. Experiment 1 assessed young and older adults’ destination memory by having participants tell facts (e.g., “A dime has 118 ridges around its edge”) to pictures of famous people (e.g., Oprah Winfrey). Surprise recognition memory tests, which also assessed confidence, revealed that older adults, compared to young adults, were disproportionately impaired on destination memory relative to spared memory for the individual components (i.e., facts, faces) of the episode. Older adults also were more confident that they had not told a fact to a particular person when they actually had (i.e., a miss); this presumably causes them to repeat information more often than young adults. When the direction of information transfer was reversed in Experiment 2, such that the famous people shared information with the participants (i.e., a source memory experiment), age-related memory differences disappeared. In contrast to the destination memory experiment, older adults in the source memory experiment were more confident than young adults that someone had shared a fact with them when a different person actually had shared the fact (i.e., a false alarm). Overall, accuracy and confidence jointly influence age-related changes to destination memory, a fundamental component of successful communication. PMID:20718537

  18. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies, as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half-pitch size of 50 nm and Microprocessor Units (MPU) an average gate length of 30 nm over the period 2008-2012. Decreases in these dimensions satisfy the producer and consumer requirements of low power consumption, more data storage in a given space, faster clock speeds, and portability of integrated circuits (ICs), particularly memories. On the other hand, these properties also make IC designs more susceptible to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without permanently affecting the functionality of the device; this is called a Single Event Upset (SEU) or soft error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking the necessary measures to continue improving system designs in nano-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation-protection precautions in the system architecture or by using corrective algorithms in system operation. Decreasing the 10B content (20% of natural boron) of the borophosphosilicate glass (BPSG) layers conventionally used in the fabrication of semiconductor devices was one of the major radiation-protection approaches for the system architecture, since neutron interaction in the BPSG layer was the origin of the SEEs through the 10B (n,alpha) 7Li reaction products: both particles produced are capable of ionization in the silicon substrate region, whose thickness is comparable to their ranges. Using the soft error phenomenon in exactly the opposite manner from the semiconductor industry can provide a new neutron detection system based on the SERs of semiconductor memories. By investigating the soft error mechanisms in available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert all memory-based intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers into semiconductor memory architectures.
This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories, to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with neutron flux and memory supply voltage. Measurement results show that soft errors in the memories increase proportionally with the neutron flux, whereas they decrease with increasing supply voltage. NISC design considerations in this dissertation include the effects of device scaling, 10B content in the BPSG layer, incoming neutron energy, and the critical charge of the node. NISCSAT simulations were performed with various memory node models to account for these effects. Device-scaling simulations showed that any further increase in the thickness of the BPSG layer beyond 2 µm causes self-shielding of the incoming neutrons by the BPSG layer and results in lower detection efficiencies. Moreover, if the BPSG layer is located more than 4 µm from the depletion region of the node, there are no soft errors in the node, because both reaction products have lower ranges in silicon or any possible node layers. Calculation results regarding the critical charge indicated that the mean charge deposition of the reaction products in the sensitive volume of the node is about 15 fC. It is evident that the NISC design should have a memory architecture with a critical charge of 15 fC or less to obtain higher detection efficiencies. Moreover, the sensitive volume should be placed in close proximity to the BPSG layers so that its location is within the range of the alpha and 7Li particles; results showed that the distance between the BPSG layer and the sensitive volume should be less than 2 µm to increase the detection efficiency of the NISC. Incoming neutron energy was also investigated by simulation, and the results showed that NISC neutron detection efficiency follows the neutron cross-sections of the 10B (n,alpha) 7Li reaction, e.g., the ratio of thermal (0.0253 eV) to fast (2 MeV) neutron detection efficiencies is approximately 8000:1. Environmental conditions and their effects on NISC performance were also studied in this research. Cosmic rays were modeled and simulated via NISCSAT to investigate the detection reliability of the NISC. Simulation results show that cosmic rays account for less than 2% of the soft errors for thermal neutron detection. On the other hand, fast neutron detection by the NISC, which already has poor efficiency due to the low neutron cross-sections, becomes almost impossible at higher altitudes, where cosmic ray fluxes and energies are higher. NISCSAT simulations of the NISC's soft error dependency on temperature and electromagnetic fields show no significant effects on NISC detection efficiency. Furthermore, the detection efficiency of the NISC decreases with both air humidity and the use of moderators, since the incoming neutrons scatter away before reaching the memory surface.

  19. Beyond the Laboratory, Into the Clinic: What Dogs with Disk Disease Have Taught Us About Photobiomodulation for Spinal Cord Injury.

    PubMed

    Robinson, Narda G

    2017-11-01

    For spinal-cord-injured (SCI) patients, integrative medicine approaches such as photomedicine and acupuncture can renew hope and offer previously unrecognized ways to help regain function and improve quality of life. By understanding the mechanisms of action that these two modalities share, practitioners can better target specific attributes of spinal cord pathophysiology that are limiting recovery. Naturally occurring intervertebral disk disease (IVDD) in dogs affords unparalleled translational opportunities to develop treatment strategies involving photobiomodulation and acupuncture. Insights derived through clinical trials of dogs with IVDD have the potential to raise the standard of care for both human and canine SCI patients.

  20. Multihop teleportation of two-qubit state via the composite GHZ-Bell channel

    NASA Astrophysics Data System (ADS)

    Zou, Zhen-Zhen; Yu, Xu-Tao; Gong, Yan-Xiao; Zhang, Zai-Chen

    2017-01-01

    A multihop teleportation protocol in a quantum communication network is introduced to teleport an arbitrary two-qubit state between two nodes that do not directly share entanglement pairs. Quantum channels are built among neighboring nodes based on a five-qubit entangled system composed of GHZ and Bell pairs. Von Neumann measurements are implemented in all intermediate nodes and the source node, and the measurement outcomes are sent to the destination node independently. After collecting all measurement outcomes at the destination node, an efficient method is proposed to calculate the unitary operations needed to transform the receiver's state into the teleported state. Therefore, by adopting the proper unitary operations at the destination node alone, the desired quantum state can be recovered perfectly. The transmission flexibility and efficiency of a quantum network with the composite GHZ-Bell channel are improved by transmitting the measurement outcomes of all nodes in parallel, reducing the hop-by-hop teleportation delay.

  1. A simplified computational memory model from information processing.

    PubMed

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express neurons or brain cortices on the basis of biology and graph theory; an intra-modular network is then developed with the modeling algorithm by mapping nodes and edges, and the bi-modular network is delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information-processing view.

  2. Contrasting effects of strong ties on SIR and SIS processes in temporal networks

    NASA Astrophysics Data System (ADS)

    Sun, Kaiyuan; Baronchelli, Andrea; Perra, Nicola

    2015-12-01

    Most real networks are characterized by connectivity patterns that evolve in time following complex, non-Markovian, dynamics. Here we investigate the impact of this ubiquitous feature by studying the Susceptible-Infected-Recovered (SIR) and Susceptible-Infected-Susceptible (SIS) epidemic models on activity driven networks with and without memory (i.e., Markovian and non-Markovian). We find that memory inhibits the spreading process in SIR models by shifting the epidemic threshold to larger values and reducing the final fraction of recovered nodes. On the contrary, in SIS processes memory reduces the epidemic threshold and, for a wide range of disease parameters, increases the fraction of nodes affected by the disease in the endemic state. The heterogeneity in tie strengths, and the frequent repetition of strong ties it entails, allows in fact less virulent SIS-like diseases to survive in tightly connected local clusters that serve as reservoir for the virus. We validate this picture by studying both processes on two real temporal networks.
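
    The memoryless (Markovian) baseline that the paper contrasts against is easy to simulate. Below is a minimal SIS process on an activity-driven network (the parameters are illustrative, not the paper's calibration): at each step an active node contacts m uniformly random partners, infection crosses a contact with probability beta, and infected nodes recover with probability mu:

      import random

      N, m, beta, mu, steps = 1000, 2, 0.3, 0.1, 200
      activity = [random.uniform(0.01, 0.3) for _ in range(N)]
      infected = set(random.sample(range(N), 10))

      for _ in range(steps):
          for i in range(N):
              if random.random() < activity[i]:
                  for j in random.sample(range(N), m):   # memoryless partners
                      if ((i in infected) != (j in infected)
                              and random.random() < beta):
                          infected.update((i, j))        # infection crosses
          infected = {k for k in infected if random.random() >= mu}

      print("endemic fraction:", len(infected) / N)

    Adding memory would mean biasing the partner choice toward previously contacted nodes; that repetition of strong ties is precisely what the paper shows helps SIS-like diseases persist in local clusters.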

  3. Building a Terabyte Memory Bandwidth Compute Node with Four Consumer Electronics GPUs

    NASA Astrophysics Data System (ADS)

    Omlin, Samuel; Räss, Ludovic; Podladchikov, Yuri

    2014-05-01

    GPUs released for consumer electronics are generally built with the same chip architectures as the GPUs released for professional usage. With regard to scientific computing, there are no obvious important differences in functionality or performance between the two types of releases, yet the price can differ by up to one order of magnitude. For example, the consumer electronics release of the most recent NVIDIA Kepler architecture (GK110), named GeForce GTX TITAN, performed as well in our memory bandwidth tests as the professional release, named Tesla K20, at about one third of the price. We explain how to design and assemble a well-adjusted computer with four high-end consumer electronics GPUs (GeForce GTX TITAN) combining more than 1 terabyte/s of memory bandwidth. We compare the system's performance and precision with those of hardware released for professional usage. The system can be used as a powerful workstation for scientific computing or as a compute node in a home-built GPU cluster.

  4. Multiplexed memory-insensitive quantum repeaters.

    PubMed

    Collins, O A; Jenkins, S D; Kuzmich, A; Kennedy, T A B

    2007-02-09

    Long-distance quantum communication via distant pairs of entangled quantum bits (qubits) is the first step towards secure message transmission and distributed quantum computing. To date, the most promising proposals require quantum repeaters to mitigate the exponential decrease in communication rate due to optical fiber losses. However, these are exquisitely sensitive to the lifetimes of their memory elements. We propose a multiplexing of quantum nodes that should enable the construction of quantum networks that are largely insensitive to the coherence times of the quantum memory elements.

  5. Oak Ridge Institutional Cluster Autotune Test Drive Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jibonananda, Sanyal; New, Joshua Ryan

    2014-02-01

    The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for ORNL staff to run computation-heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program, and the overall software environment was evaluated against challenges previously experienced with resources such as the shared-memory Nautilus system (JICS) and Titan (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional, desktop-focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable for running ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system, which the literature indicates to be efficient.

  6. HeinzelCluster: accelerated reconstruction for FORE and OSEM3D.

    PubMed

    Vollmar, S; Michel, C; Treffert, J T; Newport, D F; Casey, M; Knöss, C; Wienhard, K; Liu, X; Defrise, M; Heiss, W D

    2002-08-07

    Using iterative three-dimensional (3D) reconstruction techniques for reconstruction of positron emission tomography (PET) data is not feasible on most single-processor machines due to the excessive computing time needed, especially so for the large sinogram sizes of our high-resolution research tomograph (HRRT). In our first approach to speeding up reconstruction, we transform the 3D scan into the format of a two-dimensional (2D) scan with sinograms that can be reconstructed independently, using Fourier rebinning (FORE) and a fast 2D reconstruction method. On our dedicated reconstruction cluster (seven four-processor systems, Intel PIII@700 MHz, switched fast ethernet and Myrinet, Windows NT Server), we process these 2D sinograms in parallel. We have achieved a speedup > 23 using 26 processors and also compared results for different communication methods (RPC, Syngo, Myrinet GM). The other approach is to parallelize OSEM3D (implementation of C. Michel), which has produced the best results for HRRT data so far and is more suitable for an adequate treatment of the sinogram gaps that result from the detector geometry of the HRRT. We have implemented two levels of parallelization for our dedicated cluster (a shared-memory fine-grain level on each node utilizing all four processors and a coarse-grain level allowing for 15 nodes), reducing the time for one core iteration from over 7 h to about 35 min.
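
    The FORE-based scheme parallelizes naturally because the rebinned 2-D sinograms are mutually independent. A minimal Python sketch of that outer loop (the reconstruction body is a placeholder, not the cluster's actual 2-D algorithm):

      import numpy as np
      from multiprocessing import Pool

      def reconstruct_2d(sino):
          """Placeholder for a real 2-D reconstruction of one plane."""
          return sino.T @ sino

      if __name__ == "__main__":
          sinograms = [np.random.rand(96, 96) for _ in range(32)]  # one per plane
          with Pool(4) as pool:
              planes = pool.map(reconstruct_2d, sinograms)
          print(len(planes), "planes reconstructed")

    OSEM3D, by contrast, couples the whole volume in each iteration, which is why it needs the two-level (shared-memory plus inter-node) parallelization described above.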

  7. Obstructive Sleep Apnea is Associated With Early but Possibly Modifiable Alzheimer's Disease Biomarkers Changes.

    PubMed

    Liguori, Claudio; Mercuri, Nicola Biagio; Izzi, Francesca; Romigi, Andrea; Cordella, Alberto; Sancesario, Giuseppe; Placidi, Fabio

    2017-05-01

    Obstructive sleep apnea (OSA) is a common sleep disorder. The literature lacks studies examining sleep, cognition, and Alzheimer's Disease (AD) cerebrospinal fluid (CSF) biomarkers in OSA patients. Therefore, we first studied cognitive performances, polysomnographic sleep, and CSF β-amyloid42, tau proteins, and lactate levels in patients affected by subjective cognitive impairment (SCI) divided into three groups: OSA patients (showing an Apnea-Hypopnea Index [AHI] ≥15/hr), controls (showing an AHI < 15/hr), and patients with OSA treated by continuous positive airway pressure (CPAP). We compared results among 25 OSA, 10 OSA-CPAP, and 15 controls who underwent a protocol comprising neuropsychological testing in the morning and 48-hr polysomnography followed by CSF analysis. OSA patients showed lower CSF Aβ42 concentrations, higher CSF lactate levels, and higher t-tau/Aβ42 ratio compared to controls and OSA-CPAP patients. OSA patients also showed reduced sleep quality and continuity and lower performances at memory, intelligence, and executive tests than controls and OSA-CPAP patients. We found significant relationships among higher CSF tau proteins levels, sleep impairment, and increased CSF lactate levels in the OSA group. Moreover, lower CSF Aβ42 levels correlate with memory impairment and nocturnal oxygen saturation parameters in OSA patients. We hypothesize that OSA, by reducing sleep quality and producing intermittent hypoxia, lowers CSF Aβ42 levels, increases CSF lactate levels, and alters cognitive performances in SCI patients, thus inducing early AD clinical and neuropathological biomarkers changes. Notably, controls as well as OSA-CPAP SCI patients did not show clinical and biochemical AD markers. Therefore, OSA may induce early but possibly CPAP-modifiable AD biomarkers changes. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  8. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching for non-contiguous data structures is also disclosed. A memory line is redefined so that in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.
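
    The acquire-by-a-single-load idea can be modeled in a few lines (a software toy, not the patented hardware; the threading.Lock merely stands in for the locking device's internal atomicity): the requester issues one read, and the device performs any needed write on its behalf:

      import threading

      class LoadAcquiredLock:
          def __init__(self):
              self._guard = threading.Lock()   # stands in for the hardware
              self.owner = None

          def try_acquire(self, pid):
              """One 'load': returns True iff pid now owns the lock."""
              with self._guard:                # the device does the write
                  if self.owner is None:
                      self.owner = pid
                      return True
                  return self.owner == pid

      lock = LoadAcquiredLock()
      assert lock.try_acquire(0)
      assert not lock.try_acquire(1)

    Collapsing the usual load-then-store pair into one load matters on a weakly-ordered machine because it removes the window in which another processor could interleave.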

  9. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching for non-contiguous data structures is also disclosed. A memory line is redefined so that in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.

  10. Location-Unbound Color-Shape Binding Representations in Visual Working Memory.

    PubMed

    Saiki, Jun

    2016-02-01

    The mechanism by which nonspatial features, such as color and shape, are bound in visual working memory, and the role of those features' location in their binding, remains unknown. In the current study, I modified a redundancy-gain paradigm to investigate these issues. A set of features was presented in a two-object memory display, followed by a single object probe. Participants judged whether the probe contained any features of the memory display, regardless of its location. Response time distributions revealed feature coactivation only when both features of a single object in the memory display appeared together in the probe, regardless of the response time benefit from the probe and memory objects sharing the same location. This finding suggests that a shared location is necessary in the formation of bound representations but unnecessary in their maintenance. Electroencephalography data showed that amplitude modulations reflecting location-unbound feature coactivation were different from those reflecting the location-sharing benefit, consistent with the behavioral finding that feature-location binding is unnecessary in the maintenance of color-shape binding. © The Author(s) 2015.

  11. Shared reality in intergroup communication: Increasing the epistemic authority of an out-group audience.

    PubMed

    Echterhoff, Gerald; Kopietz, René; Higgins, E Tory

    2017-06-01

    Communicators typically tune messages to their audience's attitude. Such audience tuning biases communicators' memory for the topic toward the audience's attitude to the extent that they create a shared reality with the audience. To investigate shared reality in intergroup communication, we first established that a reduced memory bias after tuning messages to an out-group (vs. in-group) audience is a subtle index of communicators' denial of shared reality to that out-group audience (Experiments 1a and 1b). We then examined whether the audience-tuning memory bias might emerge when the out-group audience's epistemic authority is enhanced, either by increasing epistemic expertise concerning the communication topic or by creating epistemic consensus among members of a multiperson out-group audience. In Experiment 2, when Germans communicated to a Turkish audience with an attitude about a Turkish (vs. German) target, the audience-tuning memory bias appeared. In Experiment 3, when the audience of German communicators consisted of 3 Turks who all held the same attitude toward the target, the memory bias again appeared. The association between message valence and memory valence was consistently higher when the audience's epistemic authority was high (vs. low). An integrative analysis across all studies also suggested that the memory bias increases with increasing strength of epistemic inputs (epistemic expertise, epistemic consensus, and audience-tuned message production). The findings suggest novel ways of overcoming intergroup biases in intergroup relations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. The Developmental Influence of Primary Memory Capacity on Working Memory and Academic Achievement

    PubMed Central

    2015-01-01

    In this study, we investigate the development of primary memory capacity among children. Children between the ages of 5 and 8 completed 3 novel tasks (split span, interleaved lists, and a modified free-recall task) that measured primary memory by estimating the number of items in the focus of attention that could be spontaneously recalled in serial order. These tasks were calibrated against traditional measures of simple and complex span. Clear age-related changes in these primary memory estimates were observed. There were marked individual differences in primary memory capacity, but each novel measure was predictive of simple span performance. Among older children, each measure shared variance with reading and mathematics performance, whereas for younger children, the interleaved lists task was the strongest single predictor of academic ability. We argue that these novel tasks have considerable potential for the measurement of primary memory capacity and provide new, complementary ways of measuring the transient memory processes that predict academic performance. The interleaved lists task also shared features with interference control tasks, and our findings suggest that young children have a particular difficulty in resisting distraction and that variance in the ability to resist distraction is also shared with measures of educational attainment. PMID:26075630

  13. The developmental influence of primary memory capacity on working memory and academic achievement.

    PubMed

    Hall, Debbora; Jarrold, Christopher; Towse, John N; Zarandi, Amy L

    2015-08-01

    In this study, we investigate the development of primary memory capacity among children. Children between the ages of 5 and 8 completed 3 novel tasks (split span, interleaved lists, and a modified free-recall task) that measured primary memory by estimating the number of items in the focus of attention that could be spontaneously recalled in serial order. These tasks were calibrated against traditional measures of simple and complex span. Clear age-related changes in these primary memory estimates were observed. There were marked individual differences in primary memory capacity, but each novel measure was predictive of simple span performance. Among older children, each measure shared variance with reading and mathematics performance, whereas for younger children, the interleaved lists task was the strongest single predictor of academic ability. We argue that these novel tasks have considerable potential for the measurement of primary memory capacity and provide new, complementary ways of measuring the transient memory processes that predict academic performance. The interleaved lists task also shared features with interference control tasks, and our findings suggest that young children have a particular difficulty in resisting distraction and that variance in the ability to resist distraction is also shared with measures of educational attainment. (c) 2015 APA, all rights reserved.

  14. Conditional load and store in a shared memory

    DOEpatents

    Blumrich, Matthias A; Ohmacht, Martin

    2015-02-03

    A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
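
    The load-reserve/store-conditional pattern described in this record can be pictured with a short sketch. The following minimal C11 program is an illustration only, not the patented reservation-register hardware: a compare-and-swap stands in for the hardware's reservation check.

        /* Minimal sketch (not the patented mechanism): emulating the
         * load-reserve / store-conditional pattern with C11 atomics.
         * Real LL/SC hardware invalidates a reservation when another
         * processor stores to the address; here the compare-exchange
         * plays the role of that reservation check. */
        #include <stdatomic.h>
        #include <stdio.h>

        static _Atomic long counter = 0;

        void atomic_add(long delta) {
            long old, new;
            do {
                old = atomic_load(&counter);   /* "load-reserve"     */
                new = old + delta;             /* compute privately  */
                /* "store-conditional": succeeds only if no other
                 * processor has stored to the address meanwhile.    */
            } while (!atomic_compare_exchange_weak(&counter, &old, new));
        }

        int main(void) {
            atomic_add(5);
            printf("%ld\n", counter);   /* prints 5 */
            return 0;
        }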

  15. Entanglement of spin waves among four quantum memories.

    PubMed

    Choi, K S; Goban, A; Papp, S B; van Enk, S J; Kimble, H J

    2010-11-18

    Quantum networks are composed of quantum nodes that interact coherently through quantum channels, and open a broad frontier of scientific opportunities. For example, a quantum network can serve as a 'web' for connecting quantum processors for computation and communication, or as a 'simulator' allowing investigations of quantum critical phenomena arising from interactions among the nodes mediated by the channels. The physical realization of quantum networks generically requires dynamical systems capable of generating and storing entangled states among multiple quantum memories, and efficiently transferring stored entanglement into quantum channels for distribution across the network. Although such capabilities have been demonstrated for diverse bipartite systems, entangled states have not been achieved for interconnects capable of 'mapping' multipartite entanglement stored in quantum memories to quantum channels. Here we demonstrate measurement-induced entanglement stored in four atomic memories; user-controlled, coherent transfer of the atomic entanglement to four photonic channels; and characterization of the full quadripartite entanglement using quantum uncertainty relations. Our work therefore constitutes an advance in the distribution of multipartite entanglement across quantum networks. We also show that our entanglement verification method is suitable for studying the entanglement order of condensed-matter systems in thermal equilibrium.

  16. Categorizing words through semantic memory navigation

    NASA Astrophysics Data System (ADS)

    Borge-Holthoefer, J.; Arenas, A.

    2010-03-01

    Semantic memory is the cognitive system devoted to storage and retrieval of conceptual knowledge. Empirical data indicate that semantic memory is organized in a network structure. Everyday experience shows that word search and retrieval processes provide fluent and coherent speech, i.e. are efficient. This implies either that semantic memory encodes, besides thousands of words, different kinds of links for different relationships (introducing greater complexity and storage costs), or that the structure evolves, facilitating the differentiation of long-lasting semantic relations from incidental, phenomenological ones. Assuming the latter possibility, we explore a mechanism to disentangle the underlying semantic backbone, which comprises the conceptual structure (categorical relations between pairs of words), from the rest of the information present in the structure. To this end, we first present and characterize an empirical data set modeled as a network, and then we simulate a stochastic cognitive navigation on this topology. We schematize this latter process as uncorrelated random walks from node to node, which converge to a network of feature vectors. By doing so we both introduce a novel mechanism for information retrieval and point at the problem of category formation in close connection to linguistic and non-linguistic experience.
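
    The navigation scheme lends itself to a compact sketch: uncorrelated random walks over an adjacency matrix, whose visit statistics form per-node feature vectors. The toy graph, walk length, and walk count below are invented for illustration and are not the paper's data set.

        /* Illustrative sketch (toy 4-node graph; parameters arbitrary):
         * uncorrelated random walks over an adjacency matrix, accumulating
         * visit frequencies that act as a crude per-node feature vector.
         * Nodes whose rows look similar are candidates for one category. */
        #include <stdio.h>
        #include <stdlib.h>

        #define N 4
        #define WALKS 10000
        #define STEPS 8

        int adj[N][N] = {            /* toy undirected word-association graph */
            {0,1,1,0}, {1,0,1,1}, {1,1,0,0}, {0,1,0,0}
        };

        int main(void) {
            static double feat[N][N] = {{0}};  /* feat[s][v]: visits to v from s */
            srand(42);
            for (int s = 0; s < N; s++)
                for (int w = 0; w < WALKS; w++) {
                    int cur = s;
                    for (int t = 0; t < STEPS; t++) {
                        int nbr[N], k = 0;
                        for (int v = 0; v < N; v++)
                            if (adj[cur][v]) nbr[k++] = v;
                        cur = nbr[rand() % k];          /* uncorrelated next hop */
                        feat[s][cur] += 1.0 / (WALKS * STEPS);
                    }
                }
            for (int s = 0; s < N; s++) {
                for (int v = 0; v < N; v++) printf("%.3f ", feat[s][v]);
                printf("\n");
            }
            return 0;
        }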

  17. Quantum teleportation between remote atomic-ensemble quantum memories.

    PubMed

    Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-12-11

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895-1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10^8 rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing.

  18. Enabling Secure High-Performance Wireless Ad Hoc Networking

    DTIC Science & Technology

    2003-05-29

    destinations, consuming energy and available bandwidth. An attacker may similarly create a routing black hole, in which all packets are dropped: by sending...of the vertex cut, for example by forwarding only routing packets and not data packets, such that the nodes waste energy forwarding packets to the...with limited resources, including network bandwidth and the CPU processing capacity, memory, and battery power (energy) of each individual node in the

  19. Deployment of 802.15.4 Sensor Networks for C4ISR Operations

    DTIC Science & Technology

    2006-06-01

    [Figure list and acronym glossary excerpt: Figure 20, MSP410CA Dense Grid Monitoring (Crossbow User's Manual, 2005); Figure 21(a), MICA2; Deployment of Sensor Grid (COASTS OPORD, 2006); Figure 27, Topology View of Two Nodes and Base Station; Figure 28, Nodes Employing Multi...; RAM, Random Access Memory; TCP/IP, Transmission Control Protocol/Internet Protocol; TinyOS, Tiny Micro Threading Operating System; UARTs, Universal...]

  20. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta, J.; Gimenez, J.; Caubet, J.

    2003-01-01

    Parallel programming paradigms include process-level parallelism, thread-level parallelism, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architectures (SMA). The analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of the flow of useful computations.
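
    A minimal sketch of multilevel (process plus thread) parallelism follows, assuming an MPI library and OpenMP compiler support; it is illustrative only and unrelated to the codes analyzed in the presentation.

        /* Minimal hybrid MPI+OpenMP sketch of multilevel parallelism
         * (compile e.g. with: mpicc -fopenmp hybrid.c). */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            int provided, rank;
            /* process-level parallelism: one MPI rank per node or socket */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double local = 0.0;
            /* thread-level parallelism: OpenMP threads share the rank's memory */
            #pragma omp parallel for reduction(+:local)
            for (int i = 0; i < 1000000; i++)
                local += 1.0 / (1.0 + i + rank);

            double global;
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0) printf("global sum = %f\n", global);
            MPI_Finalize();
            return 0;
        }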

  1. Measuring Transactive Memory Systems Using Network Analysis

    ERIC Educational Resources Information Center

    King, Kylie Goodell

    2017-01-01

    Transactive memory systems (TMSs) describe the structures and processes that teams use to share information, work together, and accomplish shared goals. First introduced over three decades ago, TMSs have been measured in a variety of ways. This dissertation proposes the use of network analysis in measuring TMS. This is accomplished by describing…

  2. Operator Influence of Unexploded Ordnance Sensor Technologies

    DTIC Science & Technology

    2007-03-01

    [Component-list excerpt: ...chart display ActiveX control; Mscomct2.dll – date/time display ActiveX control; Pnpscr.dll – Systran SCRAMNet replicated shared memory device...; rgm_p2.dll – Phase 2 shared memory API and implementation; commercial components: StripM.ocx – strip chart display ActiveX...]

  3. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
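
    The full-replication technique named in this record can be sketched as follows. This is a minimal OpenMP illustration with a hypothetical histogram-style reduction object; it is not the paper's actual reduction-object interface.

        /* Sketch of the "full replication" technique (data and sizes are
         * hypothetical): each thread updates a private copy of the
         * reduction object (here a histogram), and the copies are merged
         * afterwards, so no locks are needed during the scan. rand_r is
         * POSIX. */
        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define BINS 16
        #define N 1000000

        int main(void) {
            int nthreads = omp_get_max_threads();
            long *hist = calloc((size_t)nthreads * BINS, sizeof *hist);
            long merged[BINS] = {0};

            #pragma omp parallel
            {
                long *mine = hist + (size_t)omp_get_thread_num() * BINS;
                unsigned seed = 1234u + omp_get_thread_num();
                #pragma omp for
                for (int i = 0; i < N; i++)
                    mine[rand_r(&seed) % BINS]++;   /* no lock needed */
            }
            for (int t = 0; t < nthreads; t++)      /* merge the replicas */
                for (int b = 0; b < BINS; b++)
                    merged[b] += hist[t * BINS + b];
            printf("bin 0 count: %ld\n", merged[0]);
            free(hist);
            return 0;
        }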

  4. Concurrent working memory load can facilitate selective attention: evidence for specialized load.

    PubMed

    Park, Soojin; Kim, Min-Shik; Chun, Marvin M

    2007-10-01

    Load theory predicts that concurrent working memory load impairs selective attention and increases distractor interference (N. Lavie, A. Hirst, J. W. de Fockert, & E. Viding). Here, the authors present new evidence that the type of concurrent working memory load determines whether load impairs selective attention or not. Working memory load was paired with a same/different matching task that required focusing on targets while ignoring distractors. When working memory items shared the same limited-capacity processing mechanisms with targets in the matching task, distractor interference increased. However, when working memory items shared processing with distractors in the matching task, distractor interference decreased, facilitating target selection. A specialized load account is proposed to describe the dissociable effects of working memory load on selective processing depending on whether the load overlaps with targets or with distractors.

  5. Parallel Calculations in LS-DYNA

    NASA Astrophysics Data System (ADS)

    Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey

    2017-11-01

    Nowadays, structural mechanics exhibits a trend towards numeric solutions being found for increasingly extensive and detailed tasks, which requires that the capacities of computing systems be enhanced. Such enhancement can be achieved by different means. E.g., when the computing system is a single workstation, its components can be replaced and/or extended (CPU, memory, etc.). In essence, such modification eventually entails replacement of the entire workstation, i.e. replacement of certain components necessitates exchange of others (faster CPUs and memory devices require buses with higher throughput, etc.). Special consideration must be given to the capabilities of modern video cards. They constitute powerful computing systems capable of running data processing in parallel. Interestingly, the tools originally designed to render high-performance graphics can be applied to solving problems not immediately related to graphics (CUDA, OpenCL, shaders, etc.). However, not all software suites utilize video cards' capacities. Another way to increase the capacity of a computing system is to implement a cluster architecture: to add cluster nodes (workstations) and to increase the network communication speed between the nodes. The advantage of this approach is extensive growth, due to which a quite powerful system can be obtained by combining not particularly powerful nodes. Moreover, separate nodes may possess different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with the LS-DYNA software. To establish a range of dependencies, a mere 2-node cluster has proven sufficient.

  6. Research on Information Sharing Mechanism of Network Organization Based on Evolutionary Game

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Liu, Gaozhi

    2018-02-01

    This article first elaborates the concept and effects of network organization and analyzes its capacity for information sharing. It then introduces evolutionary game theory and, given the various limitations that network organizations face in information sharing, establishes an evolutionary game model and analyzes the dynamic evolution of information sharing in a network organization through reasoning over the model. The analysis shows that the information-sharing decisions of network organization nodes are influenced by the initial state of the network, the excess profits in the two players' payoff matrix, and the costs and risks of information sharing.
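
    The game-theoretic setup can be made concrete with a small replicator-dynamics sketch. The payoff values below are hypothetical stand-ins for the excess profits and sharing costs the article discusses, not numbers from the paper.

        /* Hedged sketch of the evolutionary-game idea (payoff numbers are
         * hypothetical): replicator dynamics for the fraction x of nodes
         * playing "share". The 2x2 payoff convention is
         * (share,share)=a, (share,withhold)=b, (withhold,share)=c,
         * (withhold,withhold)=d. With these values sharing carries a net
         * cost, so x decays toward 0, illustrating the sharing dilemma. */
        #include <stdio.h>

        int main(void) {
            double a = 3.0, b = -1.0, c = 4.0, d = 0.0;
            double x = 0.3;                  /* initial fraction of sharers */
            double dt = 0.01;
            for (int step = 0; step <= 2000; step++) {
                double f_share = a * x + b * (1 - x);   /* expected payoffs */
                double f_hold  = c * x + d * (1 - x);
                double f_avg   = x * f_share + (1 - x) * f_hold;
                x += dt * x * (f_share - f_avg);        /* replicator equation */
                if (step % 500 == 0) printf("t=%5.1f  x=%f\n", step * dt, x);
            }
            return 0;
        }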

  7. Transactive memory systems scale for couples: development and validation

    PubMed Central

    Hewitt, Lauren Y.; Roberts, Lynne D.

    2015-01-01

    People in romantic relationships can develop shared memory systems by pooling their cognitive resources, allowing each person access to more information but with less cognitive effort. Research examining such memory systems in romantic couples largely focuses on remembering word lists or performing lab-based tasks, but these types of activities do not capture the processes underlying couples’ transactive memory systems, and may not be representative of the ways in which romantic couples use their shared memory systems in everyday life. We adapted an existing measure of transactive memory systems for use with romantic couples (TMSS-C), and conducted an initial validation study. In total, 397 participants who each identified as being a member of a romantic relationship of at least 3 months duration completed the study. The data provided a good fit to the anticipated three-factor structure of the components of couples’ transactive memory systems (specialization, credibility and coordination), and there was reasonable evidence of both convergent and divergent validity, as well as strong evidence of test–retest reliability across a 2-week period. The TMSS-C provides a valuable tool that can quickly and easily capture the underlying components of romantic couples’ transactive memory systems. It has potential to help us better understand this intriguing feature of romantic relationships, and how shared memory systems might be associated with other important features of romantic relationships. PMID:25999873

  8. Evolutionary Metal Oxide Clusters for Novel Applications: Toward High-Density Data Storage in Nonvolatile Memories.

    PubMed

    Chen, Xiaoli; Zhou, Ye; Roy, Vellaisamy A L; Han, Su-Ting

    2018-01-01

    Because of current fabrication limitations, miniaturizing nonvolatile memory devices for managing the explosive increase in big data is challenging. Molecular memories constitute a promising candidate for next-generation memories because their properties can be readily modulated through chemical synthesis. Moreover, these memories can be fabricated through mild solution processing, which can be easily scaled up. Among the various materials, polyoxometalate (POM) molecules have attracted considerable attention for use as novel data-storage nodes for nonvolatile memories. Here, an overview of recent advances in the development of POMs for nonvolatile memories is presented. The general background knowledge of the structure and property diversity of POMs is also summarized. Finally, the challenges and perspectives in the application of POMs in memories are discussed.

  9. CBM First-level Event Selector Input Interface Demonstrator

    NASA Astrophysics Data System (ADS)

    Hutter, Dirk; de Cuveland, Jan; Lindenstruth, Volker

    2017-10-01

    CBM is a heavy-ion experiment at the future FAIR facility in Darmstadt, Germany. Featuring self-triggered front-end electronics and free-streaming read-out, event selection will exclusively be done by the First Level Event Selector (FLES). Designed as an HPC cluster with several hundred nodes, its task is the online analysis and selection of the physics data at a total input data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES performs timeslice building, which combines the data from all given input links into self-contained, potentially overlapping processing intervals and distributes them to compute nodes. Partitioning the input data streams into specialized containers allows this task to be performed very efficiently. The FLES Input Interface defines the linkage between the FEE and the FLES data transport framework. A custom FPGA PCIe board, the FLES Interface Board (FLIB), is used to receive data via optical links and transfer them via DMA to the host's memory. The current prototype of the FLIB features a Kintex-7 FPGA and provides up to eight 10 GBit/s optical links. A custom FPGA design has been developed for this board. DMA transfers and data structures are optimized for subsequent timeslice building. Index tables generated by the FPGA enable fast random access to the written data containers. In addition, the DMA target buffers can directly serve as InfiniBand RDMA source buffers without copying the data. The use of POSIX shared memory for these buffers allows data access from multiple processes. An accompanying HDL module has been developed to integrate the FLES link into the front-end FPGA designs. It implements the front-end logic interface as well as the link protocol. Prototypes of all Input Interface components have been implemented and integrated into the FLES test framework. This allows the implementation and evaluation of the foreseen CBM read-out chain.
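
    The POSIX shared-memory technique mentioned for the DMA target buffers can be sketched as follows. The region name "/fles_buf" and the size are hypothetical, and this is not FLES code; on older glibc, link with -lrt.

        /* Minimal sketch of POSIX shared memory for multi-process access:
         * one process creates and maps a named region; other processes can
         * shm_open the same name and read the data without copying. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define BUF_SIZE (1 << 20)

        int main(void) {
            int fd = shm_open("/fles_buf", O_CREAT | O_RDWR, 0600);
            if (fd < 0 || ftruncate(fd, BUF_SIZE) != 0) { perror("shm"); return 1; }

            unsigned char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
            if (buf == MAP_FAILED) { perror("mmap"); return 1; }

            memcpy(buf, "timeslice data", 15);   /* stand-in for a DMA transfer */
            printf("wrote: %s\n", (char *)buf);  /* another process could mmap
                                                    "/fles_buf" and read this  */
            munmap(buf, BUF_SIZE);
            close(fd);
            shm_unlink("/fles_buf");             /* remove the named region    */
            return 0;
        }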

  10. Division of attention as a function of the number of steps, visual shifts, and memory load

    NASA Technical Reports Server (NTRS)

    Chechile, R. A.; Butler, K.; Gutowski, W.; Palmer, E. A.

    1986-01-01

    The effects on divided attention of visual shifts and long-term memory retrieval during a monitoring task are considered. A concurrent vigilance task was standardized under all experimental conditions. The results show that subjects can perform nearly perfectly on all of the time-shared tasks if long-term memory retrieval is not required for monitoring. With the requirement of memory retrieval, however, there was a large decrease in accuracy for all of the time-shared activities. It was concluded that the attentional demand of long-term memory retrieval is appreciable (even for a well-learned motor sequence), and thus memory retrieval results in a sizable reduction in the capability of subjects to divide their attention. A selected bibliography on the divided attention literature is provided.

  11. Memory-Efficient Analysis of Dense Functional Connectomes.

    PubMed

    Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download.
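
    The on-demand principle described here can be sketched in a few lines. The published tool is Matlab-based; the C sketch below, with toy sizes, only illustrates the idea of storing the time series and computing each matrix element when requested.

        /* Sketch of on-demand matrix elements (toy sizes): store only the
         * T-sample time series of N nodes and compute each Pearson
         * correlation when asked for it, so memory stays O(N*T) instead of
         * O(N*N) for the explicit dense connectome. */
        #include <math.h>
        #include <stdio.h>

        #define N 3     /* nodes (voxels) */
        #define T 5     /* time points    */

        double ts[N][T] = {
            {1, 2, 3, 4, 5},
            {2, 4, 6, 8, 10},
            {5, 3, 4, 1, 2},
        };

        /* Pearson correlation of rows i and j, computed on demand */
        double corr(int i, int j) {
            double mi = 0, mj = 0, num = 0, di = 0, dj = 0;
            for (int t = 0; t < T; t++) { mi += ts[i][t]; mj += ts[j][t]; }
            mi /= T; mj /= T;
            for (int t = 0; t < T; t++) {
                double a = ts[i][t] - mi, b = ts[j][t] - mj;
                num += a * b; di += a * a; dj += b * b;
            }
            return num / sqrt(di * dj);
        }

        int main(void) {
            printf("corr(0,1) = %.3f\n", corr(0, 1));   /* 1.000 */
            printf("corr(0,2) = %.3f\n", corr(0, 2));
            return 0;
        }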

  13. A simplified computational memory model from information processing

    PubMed Central

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from an information-processing view. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent a neuron or brain cortex on the basis of biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated from its intra-modular and inter-modular structure. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model accords with memory phenomena from an information-processing view. PMID:27876847

  14. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    NASA Astrophysics Data System (ADS)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia, and Africa. If the DEM is to be applied using fine grids, its discretization leads to a huge computational problem, which implies that such a model must be run only on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.), and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) will be presented. The main idea in the parallel version of DEM is a domain partitioning approach. The effective use of the cache and hierarchical memories of modern computers, as well as the performance, speed-ups, and efficiency achieved, will be discussed. The parallel code of DEM, created by using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are presented in short.
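
    The domain partitioning idea at the core of the parallel DEM version can be sketched with a one-dimensional halo exchange. The grid size and one-cell halo below are arbitrary; this is not the DEM code itself.

        /* Illustrative sketch of domain partitioning: each MPI rank owns a
         * strip of the domain and exchanges one-cell halos with its
         * neighbours before each computation step. */
        #include <mpi.h>
        #include <stdio.h>

        #define LOCAL 100   /* cells owned per rank */

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double u[LOCAL + 2];                     /* +2 halo cells */
            for (int i = 0; i < LOCAL + 2; i++) u[i] = rank;

            int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
            int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

            /* send my boundary cells, receive neighbours' into the halos */
            MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                         &u[LOCAL + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[LOCAL], 1, MPI_DOUBLE, right, 1,
                         &u[0], 1, MPI_DOUBLE, left, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* with >= 2 ranks, rank 0 now holds rank 1's boundary value */
            if (rank == 0) printf("halo from right neighbour: %g\n", u[LOCAL + 1]);
            MPI_Finalize();
            return 0;
        }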

  15. Investigation of Inter-Node B Macro Diversity for Single-Carrier Based Radio Access in Evolved UTRA Uplink

    NASA Astrophysics Data System (ADS)

    Kawai, Hiroyuki; Morimoto, Akihito; Higuchi, Kenichi; Sawahashi, Mamoru

    This paper investigates the gain of inter-Node B macro diversity for a scheduled-based shared channel using single-carrier FDMA radio access in the Evolved UTRA (UMTS Terrestrial Radio Access) uplink based on system-level simulations. More specifically, we clarify the gain of inter-Node B soft handover (SHO) with selection combining at the radio frame length level (= 10 msec) compared to that for hard handover (HHO) for a scheduled-based shared data channel, considering the gains of key packet-specific techniques including channel-dependent scheduling, adaptive modulation and coding (AMC), hybrid automatic repeat request (ARQ) with packet combining, and slow transmission power control (TPC). Simulation results show that the inter-Node B SHO increases the user throughput at the cell edge by approximately 10% for a short cell radius such as 100-300 m due to the diversity gain from a sudden change in other-cell interference, which is a feature specific to full scheduled-based packet access. However, it is also shown that the gain of inter-Node B SHO compared to that for HHO is small in a macrocell environment when the cell radius is longer than approximately 500 m due to the gains from hybrid ARQ with packet combining, slow TPC, and proportional fairness based channel-dependent scheduling.

  16. Designing interactivity on consumer health websites: PARAFORUM for spinal cord injury.

    PubMed

    Rubinelli, Sara; Collm, Alexandra; Glässel, Andrea; Diesner, Fabian; Kinast, Johannes; Stucki, Gerold; Brach, Mirjam

    2013-12-01

    This paper addresses the issue of interactivity on health consumer websites powered by health organizations, by presenting the design of PARAFORUM, an interactive website in the field of spinal cord injury (SCI). The design of PARAFORUM is based on different streams of research in online health communication, web-based communities, open innovation communities and formative evaluation with stakeholders. PARAFORUM implements a model of diversified interactivity based on individuals with SCI and their families, health professionals, and researchers sharing their expertise in SCI. In addition to traditional health professional/researcher-to-consumer and peer-to-peer interactions, through PARAFORUM consumers, health professionals and researchers can co-design ideas for the enhancement of practice and research on SCI. There is a need to reflect on the conceptualization and operationalization of interactivity on consumer health websites. Interactions between different users can make these websites important platforms for promoting self-management of chronic conditions, organizational innovation, and participatory research. Interactivity on consumer health websites is a main resource for health communication. Health organizations are invited to build interactive websites, while considering that the exploitation of interactivity requires users' collaboration, processes and standards for managing content, creating and translating knowledge, and conducting internet-based studies.

  17. A comparison of hydrostatic weighing and air displacement plethysmography in adults with spinal cord injury.

    PubMed

    Clasey, Jody L; Gater, David R

    2005-11-01

    To compare (1) total body volume (V(b)) and density (D(b)) measurements obtained by hydrostatic weighing (HW) and air displacement plethysmography (ADP) in adults with spinal cord injury (SCI); (2) measured and predicted thoracic gas volume (V(TG)); and (3) differences in percentage of fat measurements using ADP-obtained D(b) and HW-obtained D(b) measures that were interchanged in a 4-compartment body composition model (4-comp %fat). Twenty adults with SCI underwent ADP, V(TG), and HW testing. In a subgroup (n=13) of subjects, 4-comp %fat procedures were computed. Research laboratories in a university setting. Twenty adults with SCI below the T3 vertebrae and motor complete paraplegia. Not applicable. Statistical analyses, including determination of group mean differences, shared variance, total error, and 95% confidence intervals. The 2 methods yielded small yet significantly different V(b) and D(b). The groups' mean V(TG) did not differ significantly, but the large relative differences indicated an unacceptable amount of individual error. When the 4-comp %fat measurements were compared, there was a trend toward significant differences (P=.08). ADP is a valid alternative method of determining the V(b) and D(b) in adults with SCI; however, the predicted V(TG) should be used with caution.

  18. Synchronization and long-time memory in neural networks with inhibitory hubs and synaptic plasticity

    NASA Astrophysics Data System (ADS)

    Bertolotti, Elena; Burioni, Raffaella; di Volo, Matteo; Vezzani, Alessandro

    2017-01-01

    We investigate the dynamical role of inhibitory and highly connected nodes (hubs) in the synchronization and input processing of leaky-integrate-and-fire neural networks with short-term synaptic plasticity. We take advantage of a heterogeneous mean-field approximation to encode the role of network structure, and we tune the fraction of inhibitory neurons f_I and their connectivity level to investigate the cooperation between hub features and inhibition. We show that, depending on f_I, highly connected inhibitory nodes strongly drive the synchronization properties of the overall network through dynamical transitions from synchronous to asynchronous regimes. Furthermore, a metastable regime with long memory of external inputs emerges for a specific fraction of hub inhibitory neurons, underlining the role of inhibition and connectivity also for input processing in neural networks.
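
    The leaky-integrate-and-fire dynamics underlying the model can be sketched for a single neuron. All parameters below are arbitrary, and the paper's ingredients (short-term synaptic plasticity, inhibitory hubs, and the mean-field network structure) are deliberately omitted.

        /* Toy sketch of a leaky-integrate-and-fire update (parameters are
         * arbitrary): the membrane potential leaks toward rest, integrates
         * a constant input, and is reset whenever it crosses threshold. */
        #include <stdio.h>

        int main(void) {
            double v = 0.0;            /* membrane potential            */
            double tau = 20.0;         /* membrane time constant (ms)   */
            double v_th = 1.0;         /* firing threshold              */
            double I = 0.06;           /* constant external input       */
            double dt = 0.1;           /* time step (ms)                */
            for (int t = 0; t < 2000; t++) {
                v += dt * (-v / tau + I);      /* leaky integration      */
                if (v >= v_th) {               /* spike and reset        */
                    printf("spike at t = %.1f ms\n", t * dt);
                    v = 0.0;
                }
            }
            return 0;
        }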

  19. Overset grid applications on distributed memory MIMD computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana; Weeratunga, Sisira

    1994-01-01

    Analysis of modern aerospace vehicles requires the computation of flowfields about complex three dimensional geometries composed of regions with varying spatial resolution requirements. Overset grid methods allow the use of proven structured grid flow solvers to address the twin issues of geometrical complexity and the resolution variation by decomposing the complex physical domain into a collection of overlapping subdomains. This flexibility is accompanied by the need for irregular intergrid boundary communication among the overlapping component grids. This study investigates a strategy for implementing such a static overset grid implicit flow solver on distributed memory, MIMD computers; i.e., the 128 node Intel iPSC/860 and the 208 node Intel Paragon. Performance data for two composite grid configurations characteristic of those encountered in present day aerodynamic analysis are also presented.

  20. Managing coherence via put/get windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A; Chen, Dong; Coteus, Paul W

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
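
    The patent describes a node-specific mechanism, but the general idea of bracketing puts and gets between window open/close points, at which coherence must be resolved, can be illustrated with standard MPI one-sided communication. The sketch below is a loose analogy only, not the patented algorithm; run it with at least two ranks.

        /* Loose analogy only: MPI's one-sided model also brackets put/get
         * operations between window "fence" epochs, and data is guaranteed
         * coherent only after the closing fence. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            int local = rank;
            MPI_Win win;
            MPI_Win_create(&local, sizeof(int), sizeof(int),
                           MPI_INFO_NULL, MPI_COMM_WORLD, &win);

            MPI_Win_fence(0, win);             /* "open" the put/get window  */
            if (rank == 1) {
                int val = 42;
                MPI_Put(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
            }
            MPI_Win_fence(0, win);             /* "close": data now coherent */

            if (rank == 0) printf("rank 0 sees: %d\n", local);
            MPI_Win_free(&win);
            MPI_Finalize();
            return 0;
        }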

  1. Efficient Broadband Simulation of Fluid-Structure Coupling for Membrane-Type Acoustic Transducer Arrays Using the Multilevel Fast Multipole Algorithm.

    PubMed

    Shieh, Bernard; Sabra, Karim G; Degertekin, F Levent

    2016-11-01

    A boundary element model provides great flexibility for the simulation of membrane-type micromachined ultrasonic transducers (MUTs) in terms of membrane shape, actuating mechanism, and array layout. Acoustic crosstalk is accounted for through a mutual impedance matrix that captures the primary crosstalk mechanism of dispersive-guided modes generated at the fluid-solid interface. However, finding the solution to the fully populated boundary element matrix equation using standard techniques requires computation time and memory usage that scales by the cube and by the square of the number of nodes, respectively, limiting simulation to a small number of membranes. We implement a solver with improved speed and efficiency through the application of a multilevel fast multipole algorithm (FMA). By approximating the fields of collections of nodes using multipole expansions of the free-space Green's function, an FMA solver can enable the simulation of hundreds of thousands of nodes while incurring an approximation error that is controllable. Convergence is drastically improved using a problem-specific block-diagonal preconditioner. We demonstrate the solver's capabilities by simulating a 32-element 7-MHz 1-D capacitive MUT (CMUT) phased array with 2880 membranes. The array is simulated using 233280 nodes for a very wide frequency band up to 50 MHz. For a simulation with 15210 nodes, the FMA solver performed ten times faster and used 32 times less memory than a standard solver based on LU decomposition. We investigate the effects of mesh density and phasing on the predicted array response and find that it is necessary to use about seven nodes over the width of the membrane to observe convergence of the solution, even below the first membrane resonance frequency, due to the influence of higher order membrane modes.

  2. Welcoming Nora: a family event.

    PubMed

    Walsh, Allison J; Walsh, Paul R; Walsh, Jane M; Walsh, Gavin T

    2011-01-01

    In this column, Allison and Paul Walsh share the story of the birth of Nora, their third baby and their second child to be born at home. Allison and Paul share their individual memories of labor and birth. But their story is only part of the story of Nora's birth. Nora's birth was a family event, with Allison and Paul's other children very much part of the experience. Jane and Gavin share their own memories of their baby sister's birth.

  3. Fog computing job scheduling optimization based on bees swarm

    NASA Astrophysics Data System (ADS)

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing considerable amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) aimed at addressing the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between the CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and the genetic algorithm in terms of CPU execution time and allocated memory.

  4. Percolation of spatially constrained Erdős-Rényi networks with degree correlations.

    PubMed

    Schmeltzer, C; Soriano, J; Sokolov, I M; Rüdiger, S

    2014-01-01

    Motivated by experiments on activity in neuronal cultures [J. Soriano, M. Rodríguez Martínez, T. Tlusty, and E. Moses, Proc. Natl. Acad. Sci. USA 105, 13758 (2008)], we investigate the percolation transition and critical exponents of spatially embedded Erdős-Rényi networks with degree correlations. In our model networks, nodes are randomly distributed in a two-dimensional spatial domain, and the connection probability depends on the Euclidean link length by a power law as well as on the degrees of the linked nodes. Generally, spatial constraints lead to higher percolation thresholds in the sense that more links are needed to achieve global connectivity. However, degree correlations favor or do not favor percolation depending on the connectivity rules. We employ two construction methods to introduce degree correlations. In the first one, nodes stay homogeneously distributed and are connected via a distance- and degree-dependent probability. We observe that assortativity in the resulting network leads to a decrease of the percolation threshold. In the second construction method, nodes are first spatially segregated depending on their degree and afterwards connected with a distance-dependent probability. In this segregated model, we find a threshold increase that accompanies the rising assortativity. Additionally, when the network is constructed in a disassortative way, we observe that this property has little effect on the percolation transition.
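
    The first construction method can be sketched by scattering nodes in a unit square and linking them with a distance-dependent power-law probability. The sketch below omits the paper's degree-dependent factor, uses arbitrary parameters, and measures the largest cluster with a simple union-find.

        /* Toy sketch (all parameters arbitrary): nodes scattered in a unit
         * square, links drawn with probability p(r) = min(1, C * r^-alpha),
         * largest connected component found via union-find. */
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define N 500

        double x[N], y[N];
        int parent[N];

        int find(int a) { return parent[a] == a ? a : (parent[a] = find(parent[a])); }

        int main(void) {
            srand(7);
            for (int i = 0; i < N; i++) {
                x[i] = rand() / (double)RAND_MAX;
                y[i] = rand() / (double)RAND_MAX;
                parent[i] = i;
            }
            double C = 0.02, alpha = 2.0;
            for (int i = 0; i < N; i++)
                for (int j = i + 1; j < N; j++) {
                    double r = hypot(x[i] - x[j], y[i] - y[j]);
                    double p = C * pow(r, -alpha);     /* capped at 1 by the test */
                    if (rand() / (double)RAND_MAX < p)
                        parent[find(i)] = find(j);     /* union the clusters */
                }
            int best = 0, count[N];
            for (int i = 0; i < N; i++) count[i] = 0;
            for (int i = 0; i < N; i++)
                if (++count[find(i)] > best) best = count[find(i)];
            printf("largest component: %d of %d nodes\n", best, N);
            return 0;
        }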

  5. Colouring in the Blanks: Memory Drawings of the 1990 Kuwait Invasion

    ERIC Educational Resources Information Center

    Pepin-Wakefield, Yvonne

    2009-01-01

    This study used drawing tasks to examine the similarities and differences between females and males who shared a collective traumatic event in early childhood. Could these childhood memories be recorded, measured, and compared for gender differences in drawings by young adults who had shared a similar experience as children? Exploration of this…

  6. Functions of Memory Sharing and Mother-Child Reminiscing Behaviors: Individual and Cultural Variations

    ERIC Educational Resources Information Center

    Kulkofsky, Sarah; Wang, Qi; Koh, Jessie Bee Kim

    2009-01-01

    This study examined maternal beliefs about the functions of memory sharing and the relations between these beliefs and mother-child reminiscing behaviors in a cross-cultural context. Sixty-three European American and 47 Chinese mothers completed an open-ended questionnaire concerning their beliefs about the functions of parent-child memory…

  7. Quantifying Scheduling Challenges for Exascale System Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondragon, Oscar; Bridges, Patrick G.; Jones, Terry R

    2015-01-01

    The move towards high-performance computing (HPC) applications comprised of coupled codes and the need to dramatically reduce data movement is leading to a reexamination of time-sharing vs. space-sharing in HPC systems. In this paper, we discuss and begin to quantify the performance impact of a move away from strict space-sharing of nodes for HPC applications. Specifically, we examine the potential performance cost of time-sharing nodes between application components, we determine whether a simple coordinated scheduling mechanism can address these problems, and we research how suitable simple constraint-based optimization techniques are for solving scheduling challenges in this regime. Our results demonstrate that current general-purpose HPC system software scheduling and resource allocation systems are subject to significant performance deficiencies which we quantify for six representative applications. Based on these results, we discuss areas in which additional research is needed to meet the scheduling challenges of next-generation HPC systems.

  8. Capitalizing on global demands for open data access and interoperability - the USGIN story

    NASA Astrophysics Data System (ADS)

    Richard, Stephen; Allison, Lee

    2016-04-01

    U.S. Geoscience Information Network (USGIN - http://usgin.org) data integration framework packages data so that it is accessible through a broad array of open-source software and applications, including GeoServer, QGIS, GrassGIS, uDig, and gvSIG. USGIN data-sharing networks are designed to interact with other data exchange systems and have the ability to connect information on a granular level without jeopardizing data ownership. The system is compliant with international standards and protocols, scalable, extensible, and can be deployed throughout the world for a myriad of applications. Using GeoSciML as its data transfer standard and a collaborative approach to Content Model development and management, much of the architecture is publicly available through GitHub. Initially developed by the USGS and Association of American State Geologists as a distributed, self-maintained platform for sharing geoscience information, USGIN meets all the requirements of the White House Open Data Access Initiative that applies to (almost) all federally-funded research and all federally-maintained data, opening up huge opportunities for further deployment. In December 2015, the USGIN Content Model schema was recommended for adoption by the White House-led US Group on Earth Observations (USGEO) "Draft Common Framework for Earth-Observation Data" for all US earth observation (i.e., satellite) data. The largest USGIN node is the U.S. National Geothermal Data System (NGDS - www.geothermaldata.org). NGDS provides free open access to ~ 10 million data records, maps, and reports, sharing relevant geoscience and land use data to propel geothermal development and production in the U.S. NGDS currently serves information from hundreds of the U.S. Department of Energy's sponsored projects and geologic data feeds from 60+ data providers in all 50 states, using free and open source software, in a federated system where data owners maintain control of their data. This interactive online system is opening new exploration opportunities and shortening project development by making data easily discoverable, accessible, and interoperable at no cost to users. USGIN Foundation, Inc. was established in 2014 as a not-for-profit company to deploy the USGIN data integration framework for other natural resource (energy, water, and minerals), natural hazards, and geoscience investigation applications, nationally and worldwide. The USGIN vision is that as each data node adds to its data repositories, the system-wide USGIN functions become increasingly valuable to it. The long-term goal is that the data network reaches a 'tipping point' at which it becomes a data equivalent of the World Wide Web, where everyone will maintain the function because it is expected by its clientele and it fills critical needs.

  9. Network Coding Opportunities for Wireless Grids Formed by Mobile Devices

    NASA Astrophysics Data System (ADS)

    Nielsen, Karsten Fyhn; Madsen, Tatiana K.; Fitzek, Frank H. P.

    Wireless grids have potential in sharing communication, computational and storage resources, making these networks more powerful, more robust, and less cost intensive. However, to enjoy the benefits of cooperative resource sharing, a number of issues should be addressed, and the cost of the wireless link should be taken into account. We focus on the question of how nodes can efficiently communicate and distribute data in a wireless grid. We show the potential of a network coding approach when nodes have the possibility to combine packets, thus increasing the amount of information per transmission. Our implementation demonstrates the feasibility of network coding for wireless grids formed by mobile devices.
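
    The packet-combining idea is easiest to see in the classic two-way relay example, where one XOR-coded broadcast replaces two separate forwards; the sketch below is a generic illustration, not the authors' implementation.

        /* Classic two-way relay illustration of network coding: instead of
         * forwarding packets a and b separately, the relay broadcasts a^b;
         * each endpoint XORs the coded packet with its own to recover the
         * other's, so 3 transmissions do the work of 4. */
        #include <stdio.h>

        int main(void) {
            unsigned char a = 0x5A;      /* packet held by node A */
            unsigned char b = 0xC3;      /* packet held by node B */
            unsigned char coded = a ^ b; /* single relay broadcast */

            unsigned char b_at_A = coded ^ a;   /* A decodes B's packet */
            unsigned char a_at_B = coded ^ b;   /* B decodes A's packet */
            printf("A recovers 0x%02X, B recovers 0x%02X\n", b_at_A, a_at_B);
            return 0;
        }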

  10. Load Balancing Strategies for Multiphase Flows on Structured Grids

    NASA Astrophysics Data System (ADS)

    Olshefski, Kristopher; Owkes, Mark

    2017-11-01

    The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are only designed for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
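
    The shared-memory (OpenMP) option can be sketched with a dynamic schedule that absorbs the extra cost of interface cells. The cost model below is hypothetical and unrelated to the NGA code; with schedule(static), threads that happen to own interface-heavy chunks would lag behind.

        /* Sketch of OpenMP load balancing on a structured grid: interface
         * cells cost far more than bulk cells, so schedule(dynamic) hands
         * out chunks as threads become free, evening out the work. */
        #include <omp.h>
        #include <stdio.h>

        #define NCELLS 100000

        double work(int heavy) {             /* stand-in for a cell update  */
            double s = 0;
            int iters = heavy ? 2000 : 20;   /* interface cells cost ~100x  */
            for (int k = 0; k < iters; k++) s += k * 1e-9;
            return s;
        }

        int main(void) {
            double total = 0, t0 = omp_get_wtime();
            #pragma omp parallel for schedule(dynamic, 64) reduction(+:total)
            for (int c = 0; c < NCELLS; c++)
                total += work(c % 50 == 0);  /* ~2% of cells hold interface */
            printf("done in %.3f s (sum %g)\n", omp_get_wtime() - t0, total);
            return 0;
        }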

  11. Roofline Analysis in the Intel® Advisor to Deliver Optimized Performance for applications on Intel® Xeon Phi™ Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Lobet, Mathieu; Deslippe, Jack

    In this session we show, in two case studies, how the roofline feature of Intel Advisor has been utilized to optimize the performance of kernels of the XGC1 and PICSAR codes in preparation for the Intel Knights Landing architecture. The impact of the implemented optimizations and the benefits of using the automatic roofline feature of Intel Advisor to study the performance of large applications will be presented. This demonstrates an effective optimization strategy that has enabled these science applications to achieve up to 4.6 times speed-up and prepare for future exascale architectures. # Goal/Relevance of Session The roofline model [1,2] is a powerful tool for analyzing the performance of applications with respect to the theoretical peak achievable on a given computer architecture. It allows one to graphically represent the performance of an application in terms of operational intensity, i.e. the ratio of flops performed and bytes moved from memory, in order to guide optimization efforts. Given the scale and complexity of modern science applications, it can often be a tedious task for the user to perform the analysis on the level of functions or loops to identify where performance gains can be made. With new Intel tools, it is now possible to automate this task, as well as base the estimates of peak performance on measurements rather than vendor specifications. The goal of this session is to demonstrate how the roofline feature of Intel Advisor can be used to balance memory- vs. computation-related optimization efforts and effectively identify performance bottlenecks. A series of typical optimization techniques: cache blocking, structure refactoring, data alignment, and vectorization, illustrated by the kernel cases, will be addressed. # Description of the codes ## XGC1 The XGC1 code [3] is a magnetic fusion Particle-In-Cell code that uses an unstructured mesh for its Poisson solver that allows it to accurately resolve the edge plasma of a magnetic fusion device. After recent optimizations to its collision kernel [4], most of the computing time is spent in the electron push (pushe) kernel, where these optimization efforts have been focused. The kernel code scaled well with MPI+OpenMP but had almost no automatic compiler vectorization, in part due to indirect memory addresses and in part due to low trip counts of low-level loops that would be candidates for vectorization. Particle blocking and sorting have been implemented to increase trip counts of low-level loops and improve memory locality, and OpenMP directives have been added to vectorize compute-intensive loops that were identified by Advisor. The optimizations have improved the performance of the pushe kernel 2x on Haswell processors and 1.7x on KNL. The KNL node-for-node performance has been brought to within 30% of a NERSC Cori phase I Haswell node and we expect to bridge this gap by reducing the memory footprint of compute-intensive routines to improve cache reuse. ## PICSAR
    PICSAR is a Fortran/Python high-performance Particle-In-Cell library targeting MIC architectures, first designed to be coupled with the PIC code WARP for the simulation of laser-matter interaction and particle accelerators. PICSAR also contains a Fortran stand-alone kernel for performance studies and benchmarks. An MPI domain decomposition is used between NUMA domains, and a tile decomposition (cache blocking) handled by OpenMP has been added for shared-memory parallelism and better cache management. The so-called current deposition and field gathering steps that compose the PIC time loop constitute major hotspots that have been rewritten to enable more efficient vectorization. Particle communications between tiles and MPI domains have been merged and parallelized. All considered, these improvements provide speedups of 3.1 for order 1 and 4.6 for order 3 interpolation shape factors on KNL configured in SNC4 quadrant flat mode. Performance is similar between a node of Cori phase 1 and KNL at order 1, and better on KNL by a factor of 1.6 at order 3 with the considered test case (a homogeneous thermal plasma).
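
    Two of the optimization techniques named above, cache blocking and explicit vectorization, can be sketched in a few lines. The block size B and the transpose kernel below are illustrative choices, not code from XGC1 or PICSAR.

        /* Sketch of cache blocking plus explicit vectorization (sizes are
         * arbitrary): a blocked matrix transpose keeps the working set in
         * cache, and omp simd requests a vectorized inner loop. */
        #include <stdio.h>

        #define N 2048
        #define B 64              /* block edge chosen to fit in cache */

        int main(void) {
            static float a[N][N], t[N][N];
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++) a[i][j] = (float)(i * N + j);

            for (int ii = 0; ii < N; ii += B)          /* cache blocking */
                for (int jj = 0; jj < N; jj += B)
                    for (int i = ii; i < ii + B; i++) {
                        #pragma omp simd               /* vectorized inner loop */
                        for (int j = jj; j < jj + B; j++)
                            t[j][i] = a[i][j];
                    }

            printf("t[5][9] = %.0f (expect %d)\n", t[5][9], 9 * N + 5);
            return 0;
        }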

  12. Mapping the neuropsychological profile of temporal lobe epilepsy using cognitive network topology and graph theory.

    PubMed

    Kellermann, Tanja S; Bonilha, Leonardo; Eskandari, Ramin; Garcia-Ramos, Camille; Lin, Jack J; Hermann, Bruce P

    2016-10-01

    Normal cognitive function is defined by harmonious interaction among multiple neuropsychological domains. Epilepsy has a disruptive effect on cognition, but how diverse cognitive abilities differentially interact with one another compared with healthy controls (HC) is unclear. This study used graph theory to analyze the community structure of cognitive networks in adults with temporal lobe epilepsy (TLE) compared with that in HC. Neuropsychological assessment was performed in 100 patients with TLE and 82 HC. For each group, an adjacency matrix was constructed representing pair-wise correlation coefficients between raw scores obtained in each possible test combination. For each cognitive network, each node corresponded to a cognitive test; each link corresponded to the correlation coefficient between tests. Global network structure, community structure, and node-wise graph theory properties were qualitatively assessed. The community structure in patients with TLE was composed of fewer, larger, more mixed modules, characterizing three main modules representing close relationships between the following: 1) aspects of executive function (EF), verbal and visual memory, 2) speed and fluency, and 3) speed, EF, perception, language, intelligence, and nonverbal memory. Conversely, controls exhibited a relative division between cognitive functions, segregating into more numerous, smaller modules consisting of the following: 1) verbal memory, 2) language, perception, and intelligence, 3) speed and fluency, and 4) visual memory and EF. Overall node-wise clustering coefficient and efficiency were increased in TLE. Adults with TLE demonstrate a less clear and poorly structured segregation between multiple cognitive domains. This panorama suggests a higher degree of interdependency across multiple cognitive domains in TLE, possibly indicating compensatory mechanisms to overcome functional impairments.

  13. Intestinal lymphocyte subsets and turnover are affected by chronic alcohol consumption: implications for SIV/HIV infection.

    PubMed

    Poonia, Bhawna; Nelson, Steve; Bagby, Greg J; Veazey, Ronald S

    2006-04-15

    We recently demonstrated that simian immunodeficiency virus (SIV) viral loads were significantly higher in the plasma of rhesus macaques consuming alcohol compared with controls following intrarectal SIV infection. To understand the possible reasons behind increased viral replication, here we assessed the effects of chronic alcohol consumption on distribution and cycling of various lymphocyte subsets in the intestine. Macaques were administered alcohol (n = 11) or sucrose (n = 12), and percentages of memory and naive and activated lymphocyte subsets were compared in the blood, lymph nodes, and intestines. Although minimal differences were detected in blood or lymph nodes, there were significantly higher percentages of central memory (CD95+CD28+) CD4+ lymphocytes in the intestines from alcohol-receiving animals before infection compared with controls. In addition, higher percentages of naive (CD45RA+CD95-) as well as CXCR4+CD4 cells were detected in intestines of alcohol-treated macaques. Moreover, alcohol consumption resulted in significantly lower percentages of effector memory (CD95+CD28-) CD8 lymphocytes as well as activated Ki67+CD8 cells in the intestines. A subset (7 receiving alcohol and 8 receiving sucrose) were then intrarectally inoculated with SIV(mac251). Viral RNA was compared in different tissues using real-time PCR and in situ hybridization. Higher levels of SIV replication were observed in tissues from alcohol-consuming macaques compared with controls. Central memory CD4 lymphocytes were significantly depleted in intestines and mesenteric lymph nodes from all alcohol animals at 8 weeks postinfection. Thus, changes in the mucosal immune compartment (intestines) in response to alcohol are likely the major reasons behind higher replication of SIV observed in these animals.

  14. Stillbirth and stigma: the spoiling and repair of multiple social identities.

    PubMed

    Brierley-Jones, Lyn; Crawley, Rosalind; Lomax, Samantha; Ayers, Susan

    This study investigated mothers' experiences surrounding stillbirth in the United Kingdom, their memory making and sharing opportunities, and the effects these opportunities had on them. Qualitative data were generated from free-text responses to open-ended questions. Thematic content analysis revealed that "stigma" was experienced by most women, and Goffman's (1963) work on stigma was subsequently used as an analytical framework. Results suggest that stillbirth can spoil the identities of "patient," "mother," and "full citizen." Stigma was reported as arising from interactions with professionals, family, friends, work colleagues, and even casual acquaintances. Stillbirth produces common learning experiences often requiring "identity work" (Murphy, 2012). Memory making and sharing may be important in this work, and further research is needed. Stigma can reduce the memory sharing opportunities for women after stillbirth, and this may explain some of the differential mental health effects of memory making after stillbirth that are documented in the literature.

  15. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation by the Monte Carlo method. It is used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code were generated: one for shared-memory machines and another for distributed-memory systems using the PVM message-passing library. In both versions the neutrons of each generation are tracked in parallel. To preserve the reproducibility of the results in both versions, the random-number seeds were advanced deterministically. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
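
    The reproducibility trick mentioned above, advancing random-number seeds so that parallel and serial runs agree, can be illustrated with a toy Monte Carlo. The sketch below is hedged: it is not KENO-Va, the physics is a placeholder, and the assumption that each neutron history derives its own deterministic stream from a root seed plus a history index is one plausible reading of the abstract's "advanced seeds".

        # Illustrative sketch (not KENO-Va itself): tracking the neutrons of one
        # generation in parallel while keeping results reproducible by deriving
        # an independent, deterministic random stream per neutron history.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def track_history(args):
            history_id, root_entropy = args
            # Each history's stream is fixed by (root seed, history id), so
            # results do not depend on how histories are split across workers.
            rng = np.random.default_rng(
                np.random.SeedSequence([root_entropy, history_id]))
            # Toy "transport": count collisions until absorption
            # (placeholder physics, not a real cross-section model).
            collisions = 0
            while rng.random() > 0.3:  # 0.3 = hypothetical absorption probability
                collisions += 1
            return collisions

        if __name__ == "__main__":
            root = 12345
            histories = [(i, root) for i in range(10_000)]
            with ProcessPoolExecutor(max_workers=4) as pool:
                results = list(pool.map(track_history, histories, chunksize=500))
            print("mean collisions per history:", sum(results) / len(results))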

  16. MemAxes Visualization Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardware advancements such as Intel's PEBS and AMD's IBS, as well as software developments such as the perf_event API in Linux, have made it possible to acquire memory access samples annotated with performance information. MemAxes is a visualization and analysis tool for memory access sample data. By mapping the samples to their associated code, variables, node topology, and application dataset, MemAxes provides intuitive views of the data.

  17. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging, and liquid-cooling technologies, we will see the design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.

  18. Brain Information Sharing During Visual Short-Term Memory Binding Yields a Memory Biomarker for Familial Alzheimer's Disease.

    PubMed

    Parra, Mario A; Mikulan, Ezequiel; Trujillo, Natalia; Sala, Sergio Della; Lopera, Francisco; Manes, Facundo; Starr, John; Ibanez, Agustin

    2017-01-01

    Alzheimer's disease (AD) has been conceived as a disconnection syndrome that disrupts both brain information sharing and memory binding functions. The extent to which these two phenotypic expressions share pathophysiological mechanisms remains unknown. This study aimed to unveil the electrophysiological correlates of integrative memory impairments in AD, working towards new memory biomarkers for its prodromal stages. Patients with 100% risk of familial AD (FAD) and healthy controls underwent assessment with the Visual Short-Term Memory binding test (VSTMBT) while we recorded their EEG. We applied a novel brain connectivity method (Weighted Symbolic Mutual Information) to the EEG data. Patients showed significant deficits during the VSTMBT. A reduction of brain connectivity was observed during resting as well as during correct VSTM binding, particularly over frontal and posterior regions. An increase of connectivity was found during VSTM binding performance over central regions. While decreased connectivity was found in cases in more advanced stages of FAD, increased brain connectivity appeared in cases in earlier stages. Such altered patterns of task-related connectivity were found in 89% of the assessed patients. VSTM binding deficits in the prodromal stages of FAD are thus associated with altered patterns of brain connectivity, confirming the link between integrative memory deficits and impaired brain information sharing in prodromal FAD. While significant loss of brain connectivity seems to be a feature of the advanced stages of FAD, increased brain connectivity characterizes its earlier stages. These findings are discussed in the light of recent proposals about the earliest pathophysiological mechanisms of AD and their clinical expression. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  19. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, D.B.

    1996-12-31

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.

  20. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, Dario B.

    1996-01-01

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.

  1. Student goal orientation in learning inquiry skills with modifiable software advisors

    NASA Astrophysics Data System (ADS)

    Shimoda, Todd A.; White, Barbara Y.; Frederiksen, John R.

    2002-03-01

    A computer support environment (SCI-WISE) for learning and doing science inquiry projects was designed. SCI-WISE incorporates software advisors that give general advice about a skill such as hypothesizing. By giving general advice (rather than step-by-step procedures), the system is intended to help students conduct experiments that are more epistemologically authentic. Also, students using SCI-WISE can select the type of advice the advisors give and when they give advice, as well as modify the advisors' knowledge bases. The system is based partly on a theoretical framework of levels of agency and goal orientation. This framework assumes that giving students higher levels of agency facilitates higher-level goal orientations (such as mastery or knowledge building as opposed to task completion) that in turn produce higher levels of competence. A study of sixth grade science students was conducted. Students took a pretest questionnaire that measured their goal orientations for science projects and their inquiry skills. The students worked in pairs on an open-ended inquiry project that requires complex reasoning about human memory. The students used one of two versions of SCI-WISE - one that was modifiable and one that was not. After finishing the project, the students took a posttest questionnaire similar to the pretest, and evaluated the version of the system they used. The main results showed that (a) there was no correlation of goal orientation with grade point average, (b) knowledge-oriented students using the modifiable version tended to rate SCI-WISE more helpful than task-oriented students, and (c) knowledge-oriented pairs using the nonmodifiable version tended to have higher posttest inquiry skills scores than other pair types.

  2. High Performance Active Database Management on a Shared-Nothing Parallel Processor

    DTIC Science & Technology

    1998-05-01

    either stored or virtual. A stored node is like a materialized view. It actually contains the specified tuples. A virtual node is like a real view…

  3. Decomposing the relationship between cognitive functioning and self-referent memory beliefs in older adulthood: What’s memory got to do with it?

    PubMed Central

    Payne, Brennan R.; Gross, Alden L.; Hill, Patrick L.; Parisi, Jeanine M.; Rebok, George W.; Stine-Morrow, Elizabeth A. L.

    2018-01-01

    With advancing age, episodic memory performance shows marked declines along with concurrent reports of lower subjective memory beliefs. Given that normative age-related declines in episodic memory co-occur with declines in other cognitive domains, we examined the relationship between memory beliefs and multiple domains of cognitive functioning. Confirmatory bi-factor structural equation models were used to parse the shared and independent variance among factors representing episodic memory, psychomotor speed, and executive reasoning in one large cohort study (Senior Odyssey, N = 462), and replicated using another large cohort of healthy older adults (ACTIVE, N = 2,802). Accounting for a general fluid cognitive functioning factor (comprised of the shared variance among measures of episodic memory, speed, and reasoning) attenuated the relationship between objective memory performance and subjective memory beliefs in both samples. Moreover, the general cognitive functioning factor was the strongest predictor of memory beliefs in both samples. These findings are consistent with the notion that dispositional memory beliefs may reflect perceptions of cognition more broadly. This may be one reason why memory beliefs have broad predictive validity for interventions that target fluid cognitive ability. PMID:27685541

  4. Decomposing the relationship between cognitive functioning and self-referent memory beliefs in older adulthood: what's memory got to do with it?

    PubMed

    Payne, Brennan R; Gross, Alden L; Hill, Patrick L; Parisi, Jeanine M; Rebok, George W; Stine-Morrow, Elizabeth A L

    2017-07-01

    With advancing age, episodic memory performance shows marked declines along with concurrent reports of lower subjective memory beliefs. Given that normative age-related declines in episodic memory co-occur with declines in other cognitive domains, we examined the relationship between memory beliefs and multiple domains of cognitive functioning. Confirmatory bi-factor structural equation models were used to parse the shared and independent variance among factors representing episodic memory, psychomotor speed, and executive reasoning in one large cohort study (Senior Odyssey, N = 462), and replicated using another large cohort of healthy older adults (ACTIVE, N = 2802). Accounting for a general fluid cognitive functioning factor (comprised of the shared variance among measures of episodic memory, speed, and reasoning) attenuated the relationship between objective memory performance and subjective memory beliefs in both samples. Moreover, the general cognitive functioning factor was the strongest predictor of memory beliefs in both samples. These findings are consistent with the notion that dispositional memory beliefs may reflect perceptions of cognition more broadly. This may be one reason why memory beliefs have broad predictive validity for interventions that target fluid cognitive ability.

  5. [Overgeneral autobiographical memory in depressive disorders].

    PubMed

    Dutra, Tarcísio Gomes; Kurtinaitis, Laila da Camara Lima; Cantilino, Amaury; Vasconcelos, Maria Carolina Souto de; Hazin, Izabel; Sougey, Everton Botelho

    2012-01-01

    This article aims to review studies focusing on the relationship between overgeneral autobiographical memory and depressive disorders. Such characteristic has attracted attention because of its relationship with a poor ability to solve problems and to imagine the future, as well as with the maintenance and a poor prognosis of depression. Data were collected through a systematic search on LILACS, SciELO, MEDLINE, and IBECS databases, and also on the health sciences records of Portal de Periódicos da Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), a Brazilian journal database, focusing on articles published between 2000 and 2010. The following keywords were used: memória autobiográfica, supergeneralização da memória autobiográfica, and memória autobiográfica e depressão in Portuguese; and autobiographical memory, overgeneral autobiographical memory, and autobiographical memory and depression in English. Following application of exclusion criteria, a total of 27 studies were reviewed. Overgeneral autobiographical memory has been investigated in several depressive disorders. However, further longitudinal studies are required to confirm the relevant role of this cognitive characteristic in anamnesis and in the treatment of mood disorders.

  6. Collection Evaluation and Evolution

    NASA Technical Reports Server (NTRS)

    Habermann, Ted; Kozimor, John

    2017-01-01

    We will review metadata evaluation tools and share results from our most recent CMR analysis. We will demonstrate results using Google spreadsheets and present new results in terms of number of records that include specific content. We will show evolution of UMM-compliance over time and also show results of comparing various CMR collections (NASA, non-NASA, and SciOps).

  7. Method for prefetching non-contiguous data structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Brewster, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-05-05

    Low-latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by a store, such that the processor performs only a read operation and the hardware locking device, rather than the processor, performs the subsequent write operation. A simple prefetching mechanism for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers are used to determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous but repetitive.
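
    The pointer-augmented memory line lends itself to a small conceptual model. The sketch below simulates, in software, how a line's embedded pointer replaces a predictive algorithm in choosing the next line to fetch; real prefetching is done in hardware, and every name and value here is hypothetical.

        # Conceptual sketch of the pointer-augmented memory line described
        # above. Each "line" stores its data plus a pointer to the line that
        # should be fetched next; a toy cache follows that pointer instead of
        # using a predictive algorithm. Purely illustrative -- the patent's
        # prefetching happens in hardware, not in software like this.
        from dataclasses import dataclass

        @dataclass
        class Line:
            data: str
            next_hint: int | None  # index of the line to prefetch next, or None

        memory = {
            0: Line("head", 7),    # non-contiguous, repetitive walk: 0 -> 7 -> 3
            7: Line("middle", 3),
            3: Line("tail", None),
        }

        cache: dict[int, Line] = {}

        def access(addr: int) -> Line:
            hit = addr in cache
            line = cache.setdefault(addr, memory[addr])
            # Follow the embedded pointer to prefetch the next line in the pattern.
            if line.next_hint is not None:
                cache.setdefault(line.next_hint, memory[line.next_hint])
            print(f"access {addr}: {'hit' if hit else 'miss'}")
            return line

        addr = 0
        while addr is not None:
            line = access(addr)    # after the first miss, hints turn misses into hits
            addr = line.next_hint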

  8. A shared resource between declarative memory and motor memory.

    PubMed

    Keisler, Aysha; Shadmehr, Reza

    2010-11-03

    The neural systems that support motor adaptation in humans are thought to be distinct from those that support the declarative system. Yet, during motor adaptation changes in motor commands are supported by a fast adaptive process that has important properties (rapid learning, fast decay) that are usually associated with the declarative system. The fast process can be contrasted to a slow adaptive process that also supports motor memory, but learns gradually and shows resistance to forgetting. Here we show that after people stop performing a motor task, the fast motor memory can be disrupted by a task that engages declarative memory, but the slow motor memory is immune from this interference. Furthermore, we find that the fast/declarative component plays a major role in the consolidation of the slow motor memory. Because of the competitive nature of declarative and nondeclarative memory during consolidation, impairment of the fast/declarative component leads to improvements in the slow/nondeclarative component. Therefore, the fast process that supports formation of motor memory is not only neurally distinct from the slow process, but it shares critical resources with the declarative memory system.

  9. A shared resource between declarative memory and motor memory

    PubMed Central

    Keisler, Aysha; Shadmehr, Reza

    2010-01-01

    The neural systems that support motor adaptation in humans are thought to be distinct from those that support the declarative system. Yet, during motor adaptation changes in motor commands are supported by a fast adaptive process that has important properties (rapid learning, fast decay) that are usually associated with the declarative system. The fast process can be contrasted to a slow adaptive process that also supports motor memory, but learns gradually and shows resistance to forgetting. Here we show that after people stop performing a motor task, the fast motor memory can be disrupted by a task that engages declarative memory, but the slow motor memory is immune from this interference. Furthermore, we find that the fast/declarative component plays a major role in the consolidation of the slow motor memory. Because of the competitive nature of declarative and non-declarative memory during consolidation, impairment of the fast/declarative component leads to improvements in the slow/non-declarative component. Therefore, the fast process that supports formation of motor memory is not only neurally distinct from the slow process, but it shares critical resources with the declarative memory system. PMID:21048140

  10. Discrete-Slots Models of Visual Working-Memory Response Times

    PubMed Central

    Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.

    2014-01-01

    Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956
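
    The mixture logic at the heart of the discrete-slots account can be made concrete with a toy simulation: with probability min(k/N, 1) the probed item occupies one of k slots and the response time comes from a memory-driven accumulator; otherwise it comes from a guessing accumulator. The sketch below is a minimal stand-in with invented parameter values, not the authors' model code.

        # Minimal simulation of the mixed-state, discrete-slots idea. With
        # probability min(k/N, 1) the probed item is in a slot and RT comes
        # from a memory-based drift process; otherwise from a guessing
        # process. All parameter values here are hypothetical.
        import numpy as np

        rng = np.random.default_rng(1)

        def diffusion_rt(drift, threshold=1.0, dt=0.001, noise=1.0):
            """First-passage time of a noisy accumulator to +/- threshold."""
            x, t = 0.0, 0.0
            while abs(x) < threshold:
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return t, x > 0

        def trial(set_size, slots=3, drift_memory=2.0, drift_guess=0.0):
            in_slot = rng.random() < min(slots / set_size, 1.0)
            drift = drift_memory if in_slot else drift_guess
            rt, resp = diffusion_rt(drift)
            return rt + 0.2, resp  # 0.2 s non-decision time (hypothetical)

        rts = [trial(set_size=6)[0] for _ in range(2000)]
        print(f"median RT at set size 6: {np.median(rts):.3f} s")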

  11. Shared Representations in Language Processing and Verbal Short-Term Memory: The Case of Grammatical Gender

    ERIC Educational Resources Information Center

    Schweppe, Judith; Rummer, Ralf

    2007-01-01

    The general idea of language-based accounts of short-term memory is that retention of linguistic materials is based on representations within the language processing system. In the present sentence recall study, we address the question whether the assumption of shared representations holds for morphosyntactic information (here: grammatical gender…

  12. The Precategorical Nature of Visual Short-Term Memory

    ERIC Educational Resources Information Center

    Quinlan, Philip T.; Cohen, Dale J.

    2016-01-01

    We conducted a series of recognition experiments that assessed whether visual short-term memory (VSTM) is sensitive to shared category membership of to-be-remembered (tbr) images of common objects. In Experiment 1 some of the tbr items shared the same basic level category (e.g., hand axe): Such items were no better retained than others. In the…

  13. Fault tolerant onboard packet switch architecture for communication satellites: Shared memory per beam approach

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo; Quintana, Jorge A.; Soni, Nitin J.

    1994-01-01

    The NASA Lewis Research Center is developing a multichannel communication signal processing satellite (MCSPS) system which will provide low data rate, direct to user, commercial communications services. The focus of current space segment developments is a flexible, high-throughput, fault tolerant onboard information switching processor. This information switching processor (ISP) is a destination-directed packet switch which performs both space and time switching to route user information among numerous user ground terminals. Through both industry study contracts and in-house investigations, several packet switching architectures were examined. A contention-free approach, the shared memory per beam architecture, was selected for implementation. The shared memory per beam architecture, fault tolerance insertion, implementation, and demonstration plans are described.
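
    A destination-directed, shared-memory-per-beam switch can be modeled in a few lines: each uplink packet names its downlink beam, and the switch appends it to a memory dedicated to that beam, which is what makes the design contention-free. The toy model below is illustrative only; the beam count and packet format are invented, not taken from the ISP design.

        # Toy model of a destination-directed, shared-memory-per-beam switch:
        # every uplink packet carries its destination beam, and the switch
        # writes it into a memory dedicated to that beam, so output beams
        # never contend for a single shared buffer. Structure is illustrative.
        from collections import deque

        NUM_BEAMS = 4
        beam_memory = [deque() for _ in range(NUM_BEAMS)]  # one memory per beam

        def switch(packets):
            """Space/time switching step: route packets to their destination beam."""
            for dest_beam, payload in packets:
                beam_memory[dest_beam].append(payload)

        def downlink():
            """Each beam independently drains its own memory (no contention)."""
            return [beam_memory[b].popleft() if beam_memory[b] else None
                    for b in range(NUM_BEAMS)]

        switch([(0, "pkt-A"), (2, "pkt-B"), (0, "pkt-C")])
        print(downlink())  # ['pkt-A', None, 'pkt-B', None]
        print(downlink())  # ['pkt-C', None, None, None]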

  14. The performance of disk arrays in shared-memory database machines

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared-memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric "data temperature" as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
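
    Reading data temperature as sustained accesses per second divided by gigabytes stored, the comparison the authors draw can be illustrated with a back-of-the-envelope calculation. The numbers below are invented for illustration and are not taken from the paper.

        # Hedged illustration of the "data temperature" metric as read here:
        # sustained I/Os per second divided by gigabytes stored. All figures
        # below are hypothetical, chosen only to show the comparison.
        def data_temperature(ios_per_sec: float, capacity_gb: float) -> float:
            return ios_per_sec / capacity_gb

        # Hypothetical: an 8-drive array of small form-factor disks vs. a
        # 2-drive mirrored pair, both holding the same 4 GB of data.
        array_temp = data_temperature(ios_per_sec=8 * 60, capacity_gb=4.0)
        mirror_temp = data_temperature(ios_per_sec=2 * 60 * 1.5, capacity_gb=4.0)
        print(f"array:  {array_temp:.0f} IO/s per GB")
        print(f"mirror: {mirror_temp:.0f} IO/s per GB")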

  15. Development of the Nonhydrostatic Unified Model of the Atmosphere (NUMA): Limited-Area Mode (Preprint)

    DTIC Science & Technology

    2011-04-14

    such that q_N(x, t) = Σ_{j=1}^{M_N} q_j(t) ψ_j(x)   (9), where M_N = (N+1)^3 is the number of nodes per element and N is the order of the basis functions. …

  16. Secondary health conditions and spinal cord injury: an uphill battle in the journey of care.

    PubMed

    Guilcher, Sara J T; Craven, B Cathy; Lemieux-Charles, Louise; Casciaro, Tiziana; McColl, Mary Ann; Jaglal, Susan B

    2013-06-01

    This study aimed to understand the journey of care in the prevention and management of secondary health conditions (SHCs) following spinal cord injury (SCI). It used a case study design with 'Ontario' as the case. The Network Episode Model was used as the conceptual framework. Data sources included in-depth interviews with persons with SCI, care providers, and policy and decision makers. Document analysis was also conducted on relevant materials and policies. Key informants were selected by purposeful sampling as well as snowball sampling to provide maximum variation. Data analysis was an iterative process and involved descriptive and interpretive analyses. A coding structure was developed based on the conceptual framework, which allowed for free nodes when emerging ideas or themes were identified. Twenty-eight individuals were interviewed (14 persons with SCI and 14 persons representing care providers, community advocacy organization representatives, system service delivery administrators and policy-makers). A major over-arching domain that emerged from the data was the concept of 'fighting'. Eleven themes were identified: at the micro-individual level: (i) social isolation and system abandonment, (ii) funding and equitable care, (iii) bounded freedom and self-management; at the meso care provider level: (iv) gender and caregiving strain, (v) help versus disempowerment, (vi) holistic care (thinking outside the box), (vii) poor communication and coordination of care; and at the macro health system level: (viii) fight for access and availability, (ix) models of care tensions, (x) private versus public tensions and (xi) rigid rules and policies. Findings suggest that the journey is challenging and a persistent uphill struggle for persons with SCI, care providers, and community-based advocates. If we are to make significant gains in minimizing the incidence and severity of SHCs, we need to tailor efforts at the health system level. • Secondary health conditions are problematic for individuals with a spinal cord injury (SCI). • This study aimed to understand the journey of care in the prevention and management of secondary health conditions (SHCs) following SCI. • Findings suggest that the journey is challenging and a persistent uphill struggle for persons with SCI, care providers, and community-based advocates. • All stakeholders involved recognized the disparities in access to care and resources that exist within the system. We recommend that if we are to make significant gains in minimizing the incidence and severity of SHCs, we need to tailor efforts at the health system level.

  17. Damage healing ability of a shape-memory-polymer-based particulate composite with small thermoplastic contents

    NASA Astrophysics Data System (ADS)

    Nji, Jones; Li, Guoqiang

    2012-02-01

    The purpose of this study is to investigate the potential of a shape-memory-polymer (SMP)-based particulate composite to heal structural-length-scale damage with small thermoplastic additive contents through a close-then-heal (CTH) self-healing scheme that was introduced in a previous study (Li and Uppu 2010 Compos. Sci. Technol. 70 1419-27). The idea is to achieve reasonable healing efficiencies with minimal sacrifice in structural load capacity. By first closing cracks, the gap between the two crack surfaces is narrowed and a smaller amount of thermoplastic particles is required to achieve healing. The particulate composite was fabricated by dispersing copolyester thermoplastic particles in a shape memory polymer matrix. It is found that, for small thermoplastic contents of less than 10%, the CTH scheme followed in this study heals structural-length-scale damage in the SMP particulate composite to a meaningful extent and with less sacrifice of structural capacity.

  18. An Ephemeral Burst-Buffer File System for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Moody, Adam; Yu, Weikuan

    BurstFS is a distributed file system for node-local burst buffers on high-performance computing systems. BurstFS presents a shared file-system space across the burst buffers so that applications that use shared files can access the highly scalable burst buffers without modification.

  19. Learning to Share

    ERIC Educational Resources Information Center

    Raths, David

    2010-01-01

    In the tug-of-war between researchers and IT for supercomputing resources, a centralized approach can help both sides get more bang for their buck. As 2010 began, the University of Washington was preparing to launch its first shared high-performance computing cluster, a 1,500-node system called Hyak, dedicated to research activities. Like other…

  20. Patients' views on their decision making during inpatient rehabilitation after newly acquired spinal cord injury-A qualitative interview-based study.

    PubMed

    Scheel-Sailer, Anke; Post, Marcel W; Michel, Franz; Weidmann-Hügle, Tatjana; Baumann Hölzle, Ruth

    2017-10-01

    Involving patients in decision making is a legal requirement in many countries and is associated with better rehabilitation outcomes, but it is not easily accomplished during initial inpatient rehabilitation after severe trauma. Providing medical treatment according to the principles of shared decision making is challenging, with persons with spinal cord injury (SCI) as a case in point. The aim of this study was to retrospectively explore patients' views on their participation in decision making during their first inpatient rehabilitation after onset of SCI, in order to optimize treatment concepts. A total of 22 participants with SCI were interviewed in depth using a semi-structured interview scheme between 6 months and 35 years post-onset. Interviews were transcribed verbatim and analysed with the Mayring method for qualitative content analysis. Participants experienced a substantially reduced ability to participate in decision making during the early phase after SCI. They perceived physical, psychological, and environmental factors to have impacted upon this ability. Patients mentioned that regaining their ability to make decisions was an important goal during their first rehabilitation. Receiving adequate information in an understandable and personalized way was a prerequisite to achieving this goal. Other important factors included medical and psychological condition, personal engagement, time, and dialogue with peers. During the initial rehabilitation of patients with SCI, professionals need to deal with the discrepancy between the obligation to respect a patient's autonomy and the patient's diminished ability for decision making. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.
