Sample records for distributed memory massively

  1. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered on studies of the memory itself and on the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
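
    The associative behavior described above can be illustrated with a minimal sketch of Kanerva-style sparse distributed memory. This is not the project's implementation; the word length, number of hard locations, and activation radius below are illustrative values, and the read/write rules follow the standard counter-based formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
WORD_BITS = 256          # length of address/data words (illustrative)
HARD_LOCATIONS = 2000    # number of randomly placed hard storage locations
RADIUS = 112             # Hamming radius inside which a location is activated

hard_addresses = rng.integers(0, 2, size=(HARD_LOCATIONS, WORD_BITS), dtype=np.int8)
counters = np.zeros((HARD_LOCATIONS, WORD_BITS), dtype=np.int32)  # up/down counters

def active_locations(address):
    """All hard locations within the activation radius of the probe address."""
    distances = np.count_nonzero(hard_addresses != address, axis=1)
    return distances <= RADIUS

def write(address, word):
    """Increment counters for 1-bits, decrement for 0-bits, at every active location."""
    counters[active_locations(address)] += np.where(word == 1, 1, -1).astype(np.int32)

def read(address):
    """Sum counters over active locations and threshold to recover a word."""
    sums = counters[active_locations(address)].sum(axis=0)
    return (sums > 0).astype(np.int8)

pattern = rng.integers(0, 2, size=WORD_BITS, dtype=np.int8)
write(pattern, pattern)                                  # autoassociative store
noisy = pattern.copy()
noisy[rng.choice(WORD_BITS, 20, replace=False)] ^= 1     # corrupt 20 of 256 bits
print("bits recovered:", int(np.count_nonzero(read(noisy) == pattern)), "of", WORD_BITS)
```

    With a single stored pattern, the read converges back to the stored word even from a corrupted cue, which is the similarity sensitivity the abstract describes.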

  2. Performance of the Heavy Flavor Tracker (HFT) detector in star experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Alruwaili, Manal

    With the growing technology, the number of processors is becoming massive. Current supercomputer processing will be available on desktops in the next decade. For mass-scale application software development on the massively parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massively parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massively parallel computing such as Chapel, X10 and UPC++ exploit distributed computing, data-parallel computing and thread-parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they 1) do not incorporate any extension for object distribution to exploit the PGAS model; 2) lack the flexibility of migrating or cloning an object between places to exploit load balancing; and 3) lack the programming paradigms that result from the integration of data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object cloning and object migration; and integrate PGAS-based process constructs with these extensions on distributed objects. Also, a new paradigm, MIDD (Multiple Invocation Distributed Data), is presented in which different copies of the same class can be invoked and work concurrently on different elements of a distributed data structure using remote method invocations. I present the new constructs, their grammar and their behavior, and explain them using simple programs that utilize these constructs.

  3. A Massively Parallel Code for Polarization Calculations

    NASA Astrophysics Data System (ADS)

    Akiyama, Shizuka; Höflich, Peter

    2001-03-01

    We present an implementation of our Monte-Carlo radiation transport method for rapidly expanding, NLTE atmospheres for massively parallel computers which utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version which is based on the shared memory model. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability by about 40%.

  4. Distributed-Memory Breadth-First Search on Massive Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buluc, Aydin; Beamer, Scott; Madduri, Kamesh

    This chapter studies the problem of traversing large graphs using the breadth-first search order on distributed-memory supercomputers. We consider both the traditional level-synchronous top-down algorithm as well as the recently discovered direction optimizing algorithm. We analyze the performance and scalability trade-offs in using different local data structures such as CSR and DCSC, enabling in-node multithreading, and graph decompositions such as 1D and 2D decomposition.
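
    A minimal single-process sketch of the direction-optimizing idea follows (not the distributed CSR/DCSC implementation studied in the chapter; the toy graph and the switching factor alpha are illustrative):

```python
# Toy adjacency list; the distributed BFS partitions this structure (1D or 2D) across nodes.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def bfs_direction_optimizing(graph, source, alpha=2.0):
    """Level-synchronous BFS that switches between top-down and bottom-up steps."""
    parent = {source: source}
    frontier = {source}
    while frontier:
        unvisited = set(graph) - parent.keys()
        frontier_edges = sum(len(graph[v]) for v in frontier)
        unvisited_edges = sum(len(graph[v]) for v in unvisited)
        next_frontier = set()
        if frontier_edges * alpha > unvisited_edges:
            # Bottom-up: each unvisited vertex scans its neighbours for a frontier parent.
            for v in unvisited:
                for u in graph[v]:
                    if u in frontier:
                        parent[v] = u
                        next_frontier.add(v)
                        break
        else:
            # Top-down: the frontier pushes to its unvisited neighbours.
            for u in frontier:
                for v in graph[u]:
                    if v not in parent:
                        parent[v] = u
                        next_frontier.add(v)
        frontier = next_frontier
    return parent

print(bfs_direction_optimizing(graph, 0))   # parent pointers forming a BFS tree from vertex 0
```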

  5. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  6. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
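
    The sorted k-mer lists mentioned above can be sketched in a few lines of plain Python; the sequences, k value, and anchoring step below are illustrative stand-ins, not the Blue Gene/P data structures used by the authors:

```python
def sorted_kmer_list(sequence, k=8):
    """Return (k-mer, offset) pairs sorted lexicographically, for use as alignment anchors."""
    kmers = [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]
    kmers.sort()
    return kmers

# Toy sequences standing in for bacterial genomes; in the parallel design each BG/P node
# would hold only its slice of the sequence data and of the merged, sorted k-mer list.
genome_a = "ACGTACGTGGAACGTTACGGA"
genome_b = "TTACGTACGTGGAACGCCGTA"

list_a = sorted_kmer_list(genome_a)
list_b = sorted_kmer_list(genome_b)

# k-mers present in both genomes are candidate anchor points for progressive alignment.
shared = {kmer for kmer, _ in list_a} & {kmer for kmer, _ in list_b}
print(sorted(shared))
```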

  7. A Distributed Platform for Global-Scale Agent-Based Models of Disease Transmission

    PubMed Central

    Parker, Jon; Epstein, Joshua M.

    2013-01-01

    The Global-Scale Agent Model (GSAM) is presented. The GSAM is a high-performance distributed platform for agent-based epidemic modeling capable of simulating a disease outbreak in a population of several billion agents. It is unprecedented in its scale, its speed, and its use of Java. Solutions to multiple challenges inherent in distributing massive agent-based models are presented. Communication, synchronization, and memory usage are among the topics covered in detail. The memory usage discussion is Java specific. However, the communication and synchronization discussions apply broadly. We provide benchmarks illustrating the GSAM’s speed and scalability. PMID:24465120

  8. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme along with vector-unrolling techniques is used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
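
    The overlap of communication with arithmetic can be sketched with non-blocking MPI calls (here via mpi4py). This is a generic halo-exchange analogue with made-up data, not MPFEA's block-skyline solver:

```python
# Run with e.g.: mpirun -n 4 python halo_overlap.py   (the file name is arbitrary)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a block of a hypothetical global vector plus two halo slots.
local = np.full(10, float(rank))
halo = np.zeros(2)

left, right = (rank - 1) % size, (rank + 1) % size
requests = [
    comm.Isend(local[:1],  dest=left,  tag=0),
    comm.Isend(local[-1:], dest=right, tag=1),
    comm.Irecv(halo[0:1],  source=left,  tag=1),
    comm.Irecv(halo[1:2],  source=right, tag=0),
]

# Interior arithmetic proceeds while the halo exchange is in flight.
interior_sum = local[1:-1].sum()

MPI.Request.Waitall(requests)            # halo values are now usable
total = interior_sum + local[0] + local[-1] + halo.sum()
print(f"rank {rank}: local + halo contribution = {total}")
```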

  9. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition of numerical result independence from the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition that is reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shareable memory. The second approach is based on using a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of VNIIEF and LLNL staffs. A large number of numerical experiments has been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of the processor count and processor parameters.

  10. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter

    2015-12-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
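
    A toy analogue of the fork/copy-on-write pattern with a shared event queue is sketched below (plain Python on a Unix-like system; the event tokens, worker count, and shared_geometry buffer are hypothetical, and no Athena code is involved):

```python
import multiprocessing as mp

ctx = mp.get_context("fork")          # fork start method: children share pages copy-on-write

# Large read-only structure loaded once in the parent; forked workers reuse these memory
# pages until (if ever) they write to them. Stands in for detector geometry or conditions
# data -- purely hypothetical here.
shared_geometry = bytes(50_000_000)

def worker(queue):
    """Pull event tokens from the shared queue until a sentinel arrives."""
    while True:
        token = queue.get()
        if token is None:
            return
        # A real worker would reconstruct/simulate the event referenced by the token.
        print(f"{mp.current_process().name} processed event {token} "
              f"(geometry size {len(shared_geometry)})")

if __name__ == "__main__":
    queue = ctx.Queue()
    for token in range(16):           # hypothetical event tokens
        queue.put(token)
    num_workers = 4
    for _ in range(num_workers):
        queue.put(None)               # one sentinel per worker

    workers = [ctx.Process(target=worker, args=(queue,)) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```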

  11. Sparse distributed memory prototype: Principles of operation

    NASA Technical Reports Server (NTRS)

    Flynn, Michael J.; Kanerva, Pentti; Ahanin, Bahram; Bhadkamkar, Neal; Flaherty, Paul; Hickey, Philip

    1988-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long binary words. Such words can be written into and read from the memory, and they can be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech and scene analysis, in signal detection and verification, and in adaptive control of automated equipment. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The research is aimed at resolving major design issues that have to be faced in building the memories. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.

  12. The Representation of Knowledge in Image Understanding.

    DTIC Science & Technology

    1985-03-01

    for memory traces? It’s been quite a few years since Lashley wrote his famous paper "Search for the Engram." That paper pinpointed the fundamental... "engram hunters," mostly empirical neurobiologists guided by their own often powerful working hypotheses, ensued and has continued that search with... progress. What is an Engram? (Some Informal Considerations) Memory, viewed as traces of experience, is necessarily massively distributed. As an example

  13. Sparse distributed memory: Principles and operation

    NASA Technical Reports Server (NTRS)

    Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.

    1989-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long (1000 bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech recognition and scene analysis, in signal detection and verification, and in adaptive control of automated equipment, in general, in dealing with real world information in real time. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. Major design issues were resolved which were faced in building the memories. The design is described of a prototype memory with 256 bit addresses and from 8 to 128 K locations for 256 bit words. A key aspect of the design is extensive use of dynamic RAM and other standard components.

  14. Massive Memory Revisited: Limitations on Storage Capacity for Object Details in Visual Long-Term Memory

    ERIC Educational Resources Information Center

    Cunningham, Corbin A.; Yassa, Michael A.; Egeth, Howard E.

    2015-01-01

    Previous work suggests that visual long-term memory (VLTM) is highly detailed and has a massive capacity. However, memory performance is subject to the effects of the type of testing procedure used. The current study examines detail memory performance by probing the same memories within the same subjects, but using divergent probing methods. The…

  15. Using CLIPS in the domain of knowledge-based massively parallel programming

    NASA Technical Reports Server (NTRS)

    Dvorak, Jiri J.

    1994-01-01

    The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency with respect to the exploitation of parallelism. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about the application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with the C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering is discussed.

  16. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  17. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  18. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.

  19. Two-dimensional shape recognition using sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti; Olshausen, Bruno

    1990-01-01

    Researchers propose a method for recognizing two-dimensional shapes (hand-drawn characters, for example) with an associative memory. The method consists of two stages: first, the image is preprocessed to extract tangents to the contour of the shape; second, the set of tangents is converted to a long bit string for recognition with sparse distributed memory (SDM). SDM provides a simple, massively parallel architecture for an associative memory. Long bit vectors (256 to 1000 bits, for example) serve as both data and addresses to the memory, and patterns are grouped or classified according to similarity in Hamming distance. At the moment, tangents are extracted in a simple manner by progressively blurring the image and then using a Canny-type edge detector (Canny, 1986) to find edges at each stage of blurring. This results in a grid of tangents. While the technique used for obtaining the tangents is at present rather ad hoc, researchers plan to adopt an existing framework for extracting edge orientation information over a variety of resolutions, such as suggested by Watson (1987, 1983), Marr and Hildreth (1980), or Canny (1986).

  20. Use of Massive Parallel Computing Libraries in the Context of Global Gravity Field Determination from Satellite Data

    NASA Astrophysics Data System (ADS)

    Brockmann, J. M.; Schuh, W.-D.

    2011-07-01

    The estimation of the global Earth's gravity field parametrized as a finite spherical harmonic series is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e. the number of parameters to be estimated) and on the other hand on the number of observations (which run to several millions for, e.g., observations from the GOCE satellite mission). To circumvent these restrictions, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid composed of a large number of (distributed-memory) computers. Using this set of standard HPC libraries has the benefit that, once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
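
    The block-cyclic layout required by ScaLAPACK/PBLAS can be sketched with a small index-mapping helper; the grid shape and block size below are illustrative, not the configuration used for the GOCE normal equations:

```python
def block_cyclic_owner(global_index, block_size, num_procs):
    """Map a global row/column index to (owning process, local index) for a
    block-cyclic distribution along one dimension of the process grid."""
    block = global_index // block_size
    owner = block % num_procs
    local_block = block // num_procs
    return owner, local_block * block_size + global_index % block_size

# Hypothetical layout: a matrix distributed over a 2 x 3 process grid with 64 x 64 blocks
# (rows over the 2-process dimension, columns over the 3-process dimension).
for i, j in [(0, 0), (100, 200), (999, 999)]:
    row_owner, local_i = block_cyclic_owner(i, 64, 2)
    col_owner, local_j = block_cyclic_owner(j, 64, 3)
    print(f"global ({i:3d},{j:3d}) -> process ({row_owner},{col_owner}), local ({local_i},{local_j})")
```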

  1. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  2. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  3. The architecture of tomorrow's massively parallel computer

    NASA Technical Reports Server (NTRS)

    Batcher, Ken

    1987-01-01

    Goodyear Aerospace delivered the Massively Parallel Processor (MPP) to NASA/Goddard in May 1983, over three years ago. Ever since then, Goodyear has tried to look in a forward direction. There is always some debate as to which way is forward when it comes to supercomputer architecture. Improvements to the MPP's massively parallel architecture are discussed in the areas of data I/O, memory capacity, connectivity, and indirect (or local) addressing. In I/O, transfer rates up to 640 megabytes per second can be achieved. There are devices that can supply the data and accept it at this rate. The memory capacity can be increased up to 128 megabytes in the ARU and over a gigabyte in the staging memory. For connectivity, there are several different kinds of multistage networks that should be considered.

  4. A dynamic re-partitioning strategy based on the distribution of key in Spark

    NASA Astrophysics Data System (ADS)

    Zhang, Tianyu; Lian, Xin

    2018-05-01

    Spark is a memory-based distributed data processing framework that can process massive data sets and has become a focus in Big Data. But the performance of the Spark Shuffle depends on the distribution of the data. The naive Hash partition function of Spark cannot guarantee load balancing when data is skewed; the job time is dominated by the node that has the most data to process. In order to handle this problem, dynamic sampling is used. In the process of task execution, a histogram is used to count the key frequency distribution on each node, from which the global key frequency distribution is then generated. After analyzing the key distribution, load balance of the data partitions is achieved. Results show that the Dynamic Re-Partitioning function is better than the default Hash partition, Fine Partition and the Balanced-Schedule strategy: it can reduce the execution time of the task and improve the efficiency of the whole cluster.
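
    The core idea, assigning keys to partitions from a global key-frequency histogram, can be sketched in plain Python (a greedy stand-in for the paper's Dynamic Re-Partitioning function; the histogram values and partition count are made up):

```python
from collections import Counter

def balanced_partitioner(key_histogram, num_partitions):
    """Greedy assignment of keys to partitions so that record counts stay roughly even."""
    loads = [0] * num_partitions
    assignment = {}
    for key, count in key_histogram.most_common():   # heaviest keys first
        target = loads.index(min(loads))             # least-loaded partition so far
        assignment[key] = target
        loads[target] += count
    return assignment, loads

# Skewed toy histogram standing in for the per-node key counts merged into a global one.
histogram = Counter({"a": 9000, "b": 500, "c": 400, "d": 300, "e": 200, "f": 100})
assignment, loads = balanced_partitioner(histogram, 3)
print(assignment)   # which partition each key is routed to
print(loads)        # resulting record counts per partition
```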

  5. Massively Parallel Processing for Fast and Accurate Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu

    2005-08-01

    The competitive automotive market drives automotive manufacturers to speed up the vehicle development cycles and reduce the lead-time. Fast tooling development is one of the key areas to support fast and short vehicle development programs (VDP). In the past ten years, stamping simulation has become the most effective validation tool in predicting and resolving all potential formability and quality problems before the dies are physically made. Stamping simulation and formability analysis have become a critical business segment in the GM math-based die engineering process. As simulation has become one of the major production tools in the engineering factory, simulation speed and accuracy are two of the most important measures of stamping simulation technology. The speed and time-in-system of forming analysis become even more critical to support the fast VDP and tooling readiness. Since 1997, the General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass production analysis applications. By 2001, this technology had matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DMP/MPP technology, as well as performance benchmarks, are discussed in this publication.

  6. Massive infection and loss of CD4+ T cells occurs in the intestinal tract of neonatal rhesus macaques in acute SIV infection.

    PubMed

    Wang, Xiaolei; Rasmussen, Terri; Pahar, Bapi; Poonia, Bhawna; Alvarez, Xavier; Lackner, Andrew A; Veazey, Ronald S

    2007-02-01

    Rapid, profound, and selective depletion of memory CD4+ T cells has now been confirmed to occur in simian immunodeficiency virus (SIV)-infected adult macaques and human immunodeficiency virus (HIV)-infected humans. Within days of infection, marked depletion of memory CD4+ T cells occurs primarily in mucosal tissues, the major reservoir for memory CD4+ T cells in adults. However, HIV infection in neonates often results in higher viral loads and rapid disease progression, despite the paucity of memory CD4+ T cells in the peripheral blood. Here, we examined the immunophenotype of CD4+ T cells in normal and SIV-infected neonatal macaques to determine the distribution of naive and memory T-cell subsets in tissues. We demonstrate that, similar to adults, neonates have abundant memory CD4+ T cells in the intestinal tract and spleen and that these are selectively infected and depleted in primary SIV infection. Within 12 days of SIV infection, activated (CD69+), central memory (CD95+CD28+) CD4+ T cells are markedly and persistently depleted in the intestine and other tissues of neonates compared with controls. The results indicate that "activated" central memory CD4+ T cells are the major target for early SIV infection and CD4+ T cell depletion in neonatal macaques.

  7. Massive infection and loss of CD4+ T cells occurs in the intestinal tract of neonatal rhesus macaques in acute SIV infection

    PubMed Central

    Wang, Xiaolei; Rasmussen, Terri; Pahar, Bapi; Poonia, Bhawna; Alvarez, Xavier; Lackner, Andrew A.; Veazey, Ronald S.

    2007-01-01

    Rapid, profound, and selective depletion of memory CD4+ T cells has now been confirmed to occur in simian immunodeficiency virus (SIV)–infected adult macaques and human immunodeficiency virus (HIV)–infected humans. Within days of infection, marked depletion of memory CD4+ T cells occurs primarily in mucosal tissues, the major reservoir for memory CD4+ T cells in adults. However, HIV infection in neonates often results in higher viral loads and rapid disease progression, despite the paucity of memory CD4+ T cells in the peripheral blood. Here, we examined the immunophenotype of CD4+ T cells in normal and SIV-infected neonatal macaques to determine the distribution of naive and memory T-cell subsets in tissues. We demonstrate that, similar to adults, neonates have abundant memory CD4+ T cells in the intestinal tract and spleen and that these are selectively infected and depleted in primary SIV infection. Within 12 days of SIV infection, activated (CD69+), central memory (CD95+CD28+) CD4+ T cells are markedly and persistently depleted in the intestine and other tissues of neonates compared with controls. The results indicate that “activated” central memory CD4+ T cells are the major target for early SIV infection and CD4+ T cell depletion in neonatal macaques. PMID:17047153

  8. Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications

    NASA Astrophysics Data System (ADS)

    Francés, J.; Otero, B.; Bleda, S.; Gallego, S.; Neipp, C.; Márquez, A.; Beléndez, A.

    2015-06-01

    The Finite-Difference Time-Domain (FDTD) method is applied to the analysis of vibroacoustic problems and to study the propagation of longitudinal and transversal waves in stratified media. The potential of the scheme and the relevance of each acceleration strategy for massive FDTD computations are demonstrated in this work. In this paper, we propose two new specific implementations of the two-dimensional FDTD scheme using multi-CPU and multi-GPU, respectively. In the first implementation, an open source message passing interface (OMPI) has been included in order to fully exploit the resources of a biprocessor station with two Intel Xeon processors. Moreover, in the CPU code version, the streaming SIMD extensions (SSE) and the advanced vector extensions (AVX) have been included, along with shared memory approaches that take advantage of multi-core platforms. On the other hand, the second implementation, the multi-GPU code version, is based on peer-to-peer communications available in CUDA on two GPUs (NVIDIA GTX 670). Subsequently, this paper presents an accurate analysis of the influence of the different code versions, including shared memory approaches, vector instructions and multi-processors (both CPU and GPU), and compares them in order to delimit the degree of improvement of using distributed solutions based on multi-CPU and multi-GPU. The performance of both approaches was analysed, and it has been demonstrated that the addition of shared memory schemes to CPU computing substantially improves the performance of vector instructions, enlarging the simulation sizes that use the CPU cache memory efficiently. In this case GPU computing is roughly twice as fast as the fine-tuned CPU version for both one and two nodes. However, for massive computations explicit vector instructions are not worthwhile, since the memory bandwidth is the limiting factor and the performance tends to be the same as that of the sequential version with auto-vectorisation and the shared memory approach. In this scenario GPU computing is the best option since it provides homogeneous behaviour. More specifically, the speedup of GPU computing reaches an upper limit of 12 for both one and two GPUs, whereas the performance reaches peak values of 80 GFlops and 146 GFlops for one GPU and two GPUs respectively. Finally, the method is applied to an earth crust profile in order to demonstrate the potential of our approach and the necessity of applying acceleration strategies in this type of application.

  9. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

    Recently, surveillance and Automatic Target Recognition (ATR) applications have been increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems to provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g. telescope, precise optics, cameras, image/computer vision algorithms, which can be geographically distributed or sharing distributed resources) into a programmable system and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and show a low cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture solution with real-time recognition capabilities and computing for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers a performance advantage of an order of magnitude over a RAM-based (Random Access Memory) architecture search for implementing high speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here. Other SOPC solutions/design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.

  10. Infants Hierarchically Organize Memory Representations

    ERIC Educational Resources Information Center

    Rosenberg, Rebecca D.; Feigenson, Lisa

    2013-01-01

    Throughout development, working memory is subject to capacity limits that severely constrain short-term storage. However, adults can massively expand the total amount of remembered information by grouping items into "chunks". Although infants also have been shown to chunk objects in memory, little is known regarding the limits of this…

  11. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
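
    A minimal serial illustration of the polynomial-expansion idea using SciPy sparse matrices follows (not NTPoly's distributed, communication-avoiding implementation; the Taylor order, pruning threshold, and test matrix are arbitrary):

```python
import numpy as np
import scipy.sparse as sp

def sparse_expm_taylor(A, order=12, threshold=1e-10):
    """Approximate exp(A) by a truncated Taylor polynomial, pruning small entries
    after each multiplication so the iterates stay sparse (linear-scaling regime)."""
    n = A.shape[0]
    result = sp.identity(n, format="csr")
    term = sp.identity(n, format="csr")
    for k in range(1, order + 1):
        term = (term @ A) / k
        term.data[np.abs(term.data) < threshold] = 0.0
        term.eliminate_zeros()
        result = result + term
    return result

# Small symmetric banded test matrix; real workloads are matrices with millions of rows,
# block-distributed across MPI ranks rather than held on one process as here.
n = 200
A = sp.diags([np.full(n - 1, 0.1), np.full(n, -1.0), np.full(n - 1, 0.1)], [-1, 0, 1], format="csr")
E = sparse_expm_taylor(A)
print("average nonzeros per row of exp(A):", E.nnz / n)
```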

  12. Unleashing the Power of Distributed CPU/GPU Architectures: Massive Astronomical Data Analysis and Visualization Case Study

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-09-01

    Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the Petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with a goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality as a service in a “software as a service” manner will reduce the total cost of ownership, provide an easy to use tool to the wider astronomical community, and enable a more optimized utilization of the underlying hardware infrastructure.

  13. Conceptual Distinctiveness Supports Detailed Visual Long-Term Memory for Real-World Objects

    ERIC Educational Resources Information Center

    Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude

    2010-01-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars…

  14. Representation-Independent Iteration of Sparse Data Arrays

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    An approach is defined that describes a method of iterating over massively large arrays containing sparse data using an approach that is implementation independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from how they are internally represented in memory. This enables this approach to be backward compatible with existing schemes for representing sparse arrays as well as new approaches. What is novel here is a new approach for efficiently iterating over sparse arrays that is independent of the underlying memory layout representation of the array. A functional interface is defined for implementing sparse arrays in any modern programming language with a particular focus for the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix vector product into this representation for both the distributed and not-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program that JPL and our current program are engaged in. The goal of this program is to create powerful, scalable, and economically viable high-powered computer systems suitable for use in national security and industry by 2010. This is important to NASA for its computationally intensive requirements for analyzing and understanding the volumes of science data from our returned missions.
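
    The decoupling of iteration from storage layout can be sketched with a small protocol in Python rather than Chapel (the class names and the dot-product kernel are hypothetical illustrations of the interface idea, not the functional interface defined in the report):

```python
from typing import Iterator, Tuple, Protocol

class SparseArray(Protocol):
    """Minimal iteration interface: yield (index, value) pairs regardless of storage layout."""
    def nonzeros(self) -> Iterator[Tuple[int, float]]: ...

class DictOfKeys:
    """Hash-map storage of nonzero entries."""
    def __init__(self, entries): self._d = dict(entries)
    def nonzeros(self): return iter(sorted(self._d.items()))

class CompressedVector:
    """CSR-like storage: parallel index and value arrays."""
    def __init__(self, indices, values): self._idx, self._val = list(indices), list(values)
    def nonzeros(self): return zip(self._idx, self._val)

def sparse_dot(x: SparseArray, dense) -> float:
    """Representation-independent kernel: a sparse-times-dense dot product."""
    return sum(v * dense[i] for i, v in x.nonzeros())

dense = [1.0] * 10
a = DictOfKeys({2: 3.0, 7: -1.0})
b = CompressedVector([2, 7], [3.0, -1.0])
print(sparse_dot(a, dense), sparse_dot(b, dense))   # same answer from both layouts
```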

  15. A Single HIV-1 Cluster and a Skewed Immune Homeostasis Drive the Early Spread of HIV among Resting CD4+ Cell Subsets within One Month Post-Infection

    PubMed Central

    Avettand-Fenoël, Véronique; Nembot, Georges; Mélard, Adeline; Blanc, Catherine; Lascoux-Combe, Caroline; Slama, Laurence; Allegre, Thierry; Allavena, Clotilde; Yazdanpanah, Yazdan; Duvivier, Claudine; Katlama, Christine; Goujard, Cécile; Seksik, Bao Chau Phung; Leplatois, Anne; Molina, Jean-Michel; Meyer, Laurence; Autran, Brigitte; Rouzioux, Christine

    2013-01-01

    Optimizing therapeutic strategies for an HIV cure requires better understanding the characteristics of early HIV-1 spread among resting CD4+ cells within the first month of primary HIV-1 infection (PHI). We studied the immune distribution, diversity, and inducibility of total HIV-DNA among the following cell subsets: monocytes, peripheral blood activated and resting CD4 T cells, long-lived (naive [TN] and central-memory [TCM]) and short-lived (transitional-memory [TTM] and effector-memory cells [TEM]) resting CD4+T cells from 12 acutely-infected individuals recruited at a median 36 days from infection. Cells were sorted for total HIV-DNA quantification, phylogenetic analysis and inducibility, all studied in relation to activation status and cell signaling. One month post-infection, a single CCR5-restricted viral cluster was massively distributed in all resting CD4+ subsets from 88% subjects, while one subject showed a slight diversity. High levels of total HIV-DNA were measured among TN (median 3.4 log copies/million cells), although 10-fold less (p = 0.0005) than in equally infected TCM (4.5), TTM (4.7) and TEM (4.6) cells. CD3−CD4+ monocytes harbored a low viral burden (median 2.3 log copies/million cells), unlike equally infected resting and activated CD4+ T cells (4.5 log copies/million cells). The skewed repartition of resting CD4 subsets influenced their contribution to the pool of resting infected CD4+T cells, two thirds of which consisted of short-lived TTM and TEM subsets, whereas long-lived TN and TCM subsets contributed the balance. Each resting CD4 subset produced HIV in vitro after stimulation with anti-CD3/anti-CD28+IL-2 with kinetics and magnitude varying according to subset differentiation, while IL-7 preferentially induced virus production from long-lived resting TN cells. In conclusion, within a month of infection, a clonal HIV-1 cluster is massively distributed among resting CD4 T-cell subsets with a flexible inducibility, suggesting that subset activation and skewed immune homeostasis determine the conditions of viral dissemination and early establishment of the HIV reservoir. PMID:23691172

  16. TOUGH2_MP: A parallel version of TOUGH2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris

    2003-04-09

    TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface is adopted for communication among processors. Numerical performance of the current version of the code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we will review the development of TOUGH2_MP, and discuss its basic features, modules, and their applications.

  17. Particle simulation of plasmas on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Gledhill, I. M. A.; Storey, L. R. O.

    1987-01-01

    Particle simulations, in which collective phenomena in plasmas are studied by following the self consistent motions of many discrete particles, involve several highly repetitive sets of calculations that are readily adaptable to SIMD parallel processing. A fully electromagnetic, relativistic plasma simulation for the massively parallel processor is described. The particle motions are followed in 2 1/2 dimensions on a 128 x 128 grid, with periodic boundary conditions. The two dimensional simulation space is mapped directly onto the processor network; a Fast Fourier Transform is used to solve the field equations. Particle data are stored according to an Eulerian scheme, i.e., the information associated with each particle is moved from one local memory to another as the particle moves across the spatial grid. The method is applied to the study of the nonlinear development of the whistler instability in a magnetospheric plasma model, with an anisotropic electron temperature. The wave distribution function is included as a new diagnostic to allow simulation results to be compared with satellite observations.

  18. Fast, Massively Parallel Data Processors

    NASA Technical Reports Server (NTRS)

    Heaton, Robert A.; Blevins, Donald W.; Davis, ED

    1994-01-01

    The proposed fast, massively parallel data processor contains an 8x16 array of processing elements with an efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on an "X" interconnection grid and with external memory via a high-capacity input/output bus. This approach to conditional operation nearly doubles the speed of various arithmetic operations.

  19. Rutger's CAM2000 chip architecture

    NASA Technical Reports Server (NTRS)

    Smith, Donald E.; Hall, J. Storrs; Miyake, Keith

    1993-01-01

    This report describes the architecture and instruction set of the Rutgers CAM2000 memory chip. The CAM2000 combines features of Associative Processing (AP), Content Addressable Memory (CAM), and Dynamic Random Access Memory (DRAM) in a single chip package that is not only DRAM compatible but capable of applying simple massively parallel operations to memory. This document reflects the current status of the CAM2000 architecture and is continually updated to reflect the current state of the architecture and instruction set.

  20. Systems and methods for rapid processing and storage of data

    DOEpatents

    Stalzer, Mark A.

    2017-01-24

    Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.

  1. Design and Implementation of a Parallel Multivariate Ensemble Kalman Filter for the Poseidon Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)

    2001-01-01

    A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element by element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
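
    The Hadamard-product localization step can be illustrated with a small NumPy sketch; the taper below is a generic compactly supported function and the ensemble is synthetic, so this is only an analogue of the three-dimensional canonical correlation function used in the MvEnKF:

```python
import numpy as np

def compact_support_taper(distances, cutoff):
    """A simple compactly supported weight (not the exact function used in the paper):
    cosine-shaped, falling to zero at the cutoff radius."""
    w = 0.5 * (1.0 + np.cos(np.pi * np.clip(distances / cutoff, 0.0, 1.0)))
    w[distances >= cutoff] = 0.0
    return w

rng = np.random.default_rng(1)
n_state, n_members = 50, 10
ensemble = rng.normal(size=(n_state, n_members))
anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
P = anomalies @ anomalies.T / (n_members - 1)       # raw ensemble covariance (rank-deficient)

# Grid-point distances and the Hadamard (element-wise) localization.
coords = np.arange(n_state, dtype=float)
dist = np.abs(coords[:, None] - coords[None, :])
P_localized = P * compact_support_taper(dist, cutoff=10.0)

print("spurious long-range covariance before:", abs(P[0, 40]))
print("after localization:", abs(P_localized[0, 40]))
```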

  2. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    PubMed

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
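
    Low-rank pivoted Cholesky decomposition itself is straightforward to sketch; the toy Gram matrix below merely stands in for the quantities factorized in the paper, and the stopping rank and tolerance are arbitrary:

```python
import numpy as np

def pivoted_cholesky(A, max_rank, tol=1e-10):
    """Low-rank pivoted Cholesky: returns L (n x k) with A ~= L @ L.T.
    A must be symmetric positive semi-definite (given as a NumPy array)."""
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()      # running diagonal of the residual
    L = np.zeros((n, max_rank))
    for k in range(max_rank):
        j = int(np.argmax(d))                # pivot on the largest residual diagonal
        if d[j] < tol:
            return L[:, :k]                  # converged early at rank k
        L[:, k] = (A[:, j] - L @ L[j, :]) / np.sqrt(d[j])
        d -= L[:, k] ** 2
    return L

# Toy rank-5 positive semi-definite matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
A = X @ X.T
L = pivoted_cholesky(A, max_rank=10)
print("achieved rank:", L.shape[1], " reconstruction error:", np.linalg.norm(A - L @ L.T))
```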

  3. High speed optical object recognition processor with massive holographic memory

    NASA Technical Reports Server (NTRS)

    Chao, T.; Zhou, H.; Reyes, G.

    2002-01-01

    Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters, to accommodate the large data throughput rate needed for many real-world applications, has also been developed. System architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.

  4. Massively parallel support for a case-based planning system

    NASA Technical Reports Server (NTRS)

    Kettler, Brian P.; Hendler, James A.; Anderson, William A.

    1993-01-01

    Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.

  5. Box schemes and their implementation on the iPSC/860

    NASA Technical Reports Server (NTRS)

    Chattot, J. J.; Merriam, M. L.

    1991-01-01

    Research on algorithms for efficiently solving fluid flow problems on massively parallel computers is continued in the present paper. Attention is given to the implementation of a box scheme on the iPSC/860, a massively parallel computer with a peak speed of 10 Gflops and a memory of 128 Mwords. A domain decomposition approach to parallelism is used.

  6. Spacecraft On-Board Information Extraction Computer (SOBIEC)

    NASA Technical Reports Server (NTRS)

    Eisenman, David; Decaro, Robert E.; Jurasek, David W.

    1994-01-01

    The Jet Propulsion Laboratory is the Technical Monitor on an SBIR Program issued for Irvine Sensors Corporation to develop a highly compact, dual use massively parallel processing node known as SOBIEC. SOBIEC couples 3D memory stacking technology provided by nCUBE. The node contains sufficient network Input/Output to implement up to an order-13 binary hypercube. The benefit of this network is that it scales linearly as more processors are added, and it is a superset of other commonly used interconnect topologies such as meshes, rings, toroids, and trees. In this manner, a distributed processing network can be easily devised and supported. The SOBIEC node has sufficient memory for most multi-computer applications, and also supports external memory expansion and DMA interfaces. The SOBIEC node is supported by a mature set of software development tools from nCUBE. The nCUBE operating system (OS) provides configuration and operational support for up to 8000 SOBIEC processors in an order-13 binary hypercube or any subset or partition(s) thereof. The OS is UNIX (USL SVR4) compatible, with C, C++, and FORTRAN compilers readily available. A stand-alone development system is also available to support SOBIEC test and integration.
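
    As an aside on why a binary hypercube subsumes meshes, rings, toroids, and trees: each node's neighbors are reached by flipping one address bit at a time, so an order-13 network has 2^13 = 8192 nodes with 13 links each. The snippet below illustrates only that addressing rule; it is not nCUBE's routing code.

        def hypercube_neighbors(node_id: int, order: int = 13) -> list[int]:
            """Neighbors of a node in an order-`order` binary hypercube:
            flip each of the `order` address bits in turn."""
            return [node_id ^ (1 << bit) for bit in range(order)]

        # Node 0 of an order-13 hypercube (2**13 = 8192 nodes) has 13 neighbors.
        print(len(hypercube_neighbors(0)), hypercube_neighbors(0)[:4])   # 13 [1, 2, 4, 8]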

  7. Ultrahigh-order Maxwell solver with extreme scalability for electromagnetic PIC simulations of plasmas

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri; Vay, Jean-Luc

    2018-07-01

    The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly-scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasmas simulations with electromagnetic Particle-In-Cell methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiations, we have shown that the inaccuracies of standard FD-based PIC methods prevent the modeling on present supercomputers at sufficient accuracy. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus for the first time the accurate modeling of plasma mirrors in 3D.

  8. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    NASA Astrophysics Data System (ADS)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive array-based climate data are being generated from global surveillance systems and model simulations. These data are widely used to analyze environmental problems such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging due to both data- and computing-intensive issues in data processing and analysis. To tackle these challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g., netCDF4, HDF4) as native formats, stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services are provided to retrieve data according to a defined geospatial and temporal bounding box. The data subsets are read out, and a data partition strategy is applied to split the queried data evenly across the computing nodes and store it in memory as climateRDDs for processing. By leveraging Spark SQL and User Defined Functions (UDFs), climate data analysis operations can be conducted in intuitive SQL. ClimateSpark is evaluated on two use cases using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset: one conducts a spatiotemporal query and visualizes the subset results as an animation; the other compares different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark significantly accelerates data query and processing, and enables complex analysis services to be served in an SQL-style fashion.
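
    The SQL-plus-UDF style of analysis described above can be sketched with stock PySpark. This is illustrative only: ClimateSpark's spatiotemporal index, climateRDDs, and HDF/netCDF readers are not reproduced, and the table name, column names, and the k_to_c function are hypothetical.

        from pyspark.sql import SparkSession
        from pyspark.sql.types import DoubleType

        spark = SparkSession.builder.appName("climate-sql-sketch").getOrCreate()

        # Tiny hypothetical table standing in for a queried climateRDD subset.
        rows = [("2015-07-01", 38.9, -77.0, 301.2),
                ("2015-07-01", 39.0, -77.1, 300.8)]
        df = spark.createDataFrame(rows, ["time", "lat", "lon", "t2m_kelvin"])
        df.createOrReplaceTempView("merra_subset")

        # A User Defined Function exposed to SQL, as the abstract describes.
        spark.udf.register("k_to_c", lambda k: k - 273.15, DoubleType())

        spark.sql("""
            SELECT time, avg(k_to_c(t2m_kelvin)) AS mean_t2m_c
            FROM merra_subset
            WHERE lat BETWEEN 35 AND 40 AND lon BETWEEN -80 AND -75
            GROUP BY time
        """).show()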

  9. LDRD final report on massively-parallel linear programming : the parPCx system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar

    2005-02-01

    This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project "Massively-Parallel Linear Programming". We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.

  10. Scalable parallel distance field construction for large-scale applications

    DOE PAGES

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; ...

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
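
    For orientation, the quantity being accelerated is simply the distance from every grid location to the nearest sample of the surface of interest. The sketch below is a serial, single-node baseline using a k-d tree, not the paper's parallel distance tree, and the grid size and surface samples are arbitrary.

        import numpy as np
        from scipy.spatial import cKDTree

        def distance_field(surface_points, grid_shape, spacing=1.0):
            """Distance from every cell of a regular 3D grid to the nearest
            surface sample (serial baseline, Euclidean metric)."""
            coords = np.column_stack([a.ravel() for a in np.indices(grid_shape)])
            dist, _ = cKDTree(surface_points).query(coords * spacing)
            return dist.reshape(grid_shape)

        # Example: distances to two sample points on a 16^3 grid.
        surface = np.array([[2.0, 3.0, 4.0], [10.0, 10.0, 10.0]])
        field = distance_field(surface, (16, 16, 16))
        print(field.min(), field.max())   # 0.0 at a sample cell, largest at the farthest corner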

  11. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.

  12. Randomized Dynamic Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
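
    A minimal sketch of the general idea (a randomized range finder feeding exact DMD) is given below in NumPy. It is not the authors' single-pass variant: it assumes the snapshot matrix fits in memory, and the rank, oversampling, and power-iteration parameters are illustrative.

        import numpy as np

        def randomized_dmd(D, rank, oversample=10, n_power=2, seed=0):
            """Randomized DMD sketch: compress the snapshot matrix with a random
            projection, then run exact DMD on the small factors."""
            rng = np.random.default_rng(seed)
            X, Y = D[:, :-1], D[:, 1:]                   # paired snapshots
            # Randomized range finder for X, with a few power iterations.
            Z = X @ rng.standard_normal((X.shape[1], rank + oversample))
            for _ in range(n_power):
                Z = X @ (X.T @ Z)
            Q, _ = np.linalg.qr(Z)
            # Exact DMD on the projected data.
            U, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
            U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
            Atilde = (Q @ U).T @ Y @ Vt.T @ np.diag(1.0 / s)
            eigvals, W = np.linalg.eig(Atilde)
            modes = Y @ Vt.T @ np.diag(1.0 / s) @ W      # DMD modes in the full space
            return eigvals, modes

        # Example: a noisy travelling wave; the dominant eigenvalue pair sits near the unit circle.
        t = np.linspace(0, 4 * np.pi, 200)
        x = np.linspace(0, 1, 400)
        D = (np.sin(20 * x[:, None] - 5 * t[None, :])
             + 0.01 * np.random.default_rng(1).standard_normal((400, 200)))
        eigvals, modes = randomized_dmd(D, rank=4)
        print(np.round(np.abs(eigvals), 3))              # magnitudes close to 1 for the wave modes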

  13. MODA A Framework for Memory Centric Performance Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Su, Chun-Yi; White, Amanda M.

    2012-06-29

    In the age of massive parallelism, the focus of performance analysis has switched from the processor and related structures to the memory and I/O resources. Adapting to this new reality, a performance analysis tool has to provide a way to analyze resource usage to pinpoint existing and potential problems in a given application. This paper provides an overview of the Memory Observant Data Analysis (MODA) tool, a memory-centric tool first implemented on the Cray XMT supercomputer. Throughout the paper, MODA's capabilities have been showcased with experiments done on matrix multiply and Graph-500 application codes.

  14. Role of APOE Isoforms in the Pathogenesis of TBI induced Alzheimer’s Disease

    DTIC Science & Technology

    2016-10-01

    deletion, APOE targeted replacement, complex breeding, CCI model optimization, mRNA library generation, high throughput massive parallel sequencing...demonstrate that the lack of Abca1 increases amyloid plaques and decreased APOE protein levels in AD-model mice. In this proposal we will test the hypothesis...injury, inflammatory reaction, transcriptome, high throughput massive parallel sequencing, mRNA-seq., behavioral testing, memory impairment, recovery 3

  15. Can We Remember Future Actions yet Forget the Last Two Minutes? Study in Transient Global Amnesia

    ERIC Educational Resources Information Center

    Hainselin, Mathieu; Quinette, Peggy; Desgranges, Beatrice; Martinaud, Olivier; Hannequin, Didier; de La Sayette, Vincent; Viader, Fausto; Eustache, Francis

    2011-01-01

    Transient global amnesia (TGA) is a clinical syndrome characterized by the abrupt onset of a massive episodic memory deficit that spares other cognitive functions. If the anterograde dimension is known to be impaired in TGA, researchers have yet to investigate prospective memory (PM)--which involves remembering to perform an intended action at…

  16. SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data

    NASA Astrophysics Data System (ADS)

    Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework that extends Apache Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for high-dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show the usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate performance of the various matrix libraries in distributed pipelines, such as Nd4j and Breeze. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, and the parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.

  17. Nonvolatile “AND,” “OR,” and “NOT” Boolean logic gates based on phase-change memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y.; Zhong, Y. P.; Deng, Y. F.

    2013-12-21

    Electronic devices or circuits that can implement both logic and memory functions are regarded as the building blocks for future massively parallel computing beyond the von Neumann architecture. Here we propose phase-change memory (PCM)-based nonvolatile logic gates capable of AND, OR, and NOT Boolean logic operations, verified in SPICE simulations and circuit experiments. The logic operations are performed in parallel, and the results can be stored directly in the states of the logic gates, facilitating the combination of computing and memory in the same circuit. These results are encouraging for ultralow-power and high-speed nonvolatile logic circuit design based on novel memory devices.

  18. Phase space simulation of collisionless stellar systems on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1987-01-01

    A numerical technique for solving the collisionless Boltzmann equation describing the time evolution of a self gravitating fluid in phase space was implemented on the Massively Parallel Processor (MPP). The code performs calculations for a two dimensional phase space grid (with one space and one velocity dimension). Some results from calculations are presented. The execution speed of the code is comparable to the speed of a single processor of a Cray-XMP. Advantages and disadvantages of the MPP architecture for this type of problem are discussed. The nearest neighbor connectivity of the MPP array does not pose a significant obstacle. Future MPP-like machines should have much more local memory and easier access to staging memory and disks in order to be effective for this type of problem.

  19. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  20. Parallel design patterns for a low-power, software-defined compressed video encoder

    NASA Astrophysics Data System (ADS)

    Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar

    2011-06-01

    Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.

  1. Intrinsic role of FoxO3a in the development of CD8+ T cell memory

    PubMed Central

    Tzelepis, Fanny; Joseph, Julie; Haddad, Elias K.; MacLean, Susanne; Dudani, Renu; Agenes, Fabien; Peng, Stanford L.; Sekaly, Rafick-Pierre; Sad, Subash

    2013-01-01

    CD8+ T cells undergo rapid expansion during infection with intracellular pathogens, which is followed by swift and massive culling of primed CD8+ T cells. The mechanisms that govern the massive contraction and maintenance of primed CD8+ T cells are not clear. We show here that the transcription factor, FoxO3a does not influence antigen-presentation and the consequent expansion of CD8+ T cell response during Listeria monocytogenes (LM) infection, but plays a key role in the maintenance of memory CD8+ T cells. The effector function of primed CD8+ T cells as revealed by cytokine secretion and CD107a degranulation was not influenced by inactivation of FoxO3a. Interestingly, FoxO3a-deficient CD8+ T cells displayed reduced expression of pro-apoptotic molecules BIM and PUMA during the various phases of response, and underwent reduced apoptosis in comparison to WT cells. A higher number of memory precursor effector cells (MPECs) and memory subsets were detectable in FoxO3a-deficient mice compared to WT mice. Furthermore, FoxO3a-deficient memory CD8+ T cells upon transfer into normal or RAG1-deficient mice displayed enhanced survival. These results suggest that FoxO3a acts in a cell intrinsic manner to regulate the survival of primed CD8+ T cells. PMID:23277488

  2. Kanerva's sparse distributed memory: An associative memory algorithm well-suited to the Connection Machine

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1988-01-01

    The advent of the Connection Machine profoundly changes the world of supercomputers. The highly nontraditional architecture makes possible the exploration of algorithms that were impractical for standard Von Neumann architectures. Sparse distributed memory (SDM) is an example of such an algorithm. Sparse distributed memory is a particularly simple and elegant formulation for an associative memory. The foundations for sparse distributed memory are described, and some simple examples of using the memory are presented. The relationship of sparse distributed memory to three important computational systems is shown: random-access memory, neural networks, and the cerebellum of the brain. Finally, the implementation of the algorithm for sparse distributed memory on the Connection Machine is discussed.

  3. Staging memory for massively parallel processor

    NASA Technical Reports Server (NTRS)

    Batcher, Kenneth E. (Inventor)

    1988-01-01

    The invention herein relates to a computer organization capable of rapidly processing extremely large volumes of data. A staging memory is provided having a main stager portion consisting of a large number of memory banks which are accessed in parallel to receive, store, and transfer data words simultaneous with each other. Substager portions interconnect with the main stager portion to match input and output data formats with the data format of the main stager portion. An address generator is coded for accessing the data banks for receiving or transferring the appropriate words. Input and output permutation networks arrange the lineal order of data into and out of the memory banks.

  4. Binary synaptic connections based on memory switching in a-Si:H for artificial neural networks

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Lamb, J. L.; Moopenn, A.; Khanna, S. K.

    1987-01-01

    A scheme for nonvolatile associative electronic memory storage with high information storage density is proposed which is based on neural network models and which uses a matrix of two-terminal passive interconnections (synapses). It is noted that the massive parallelism in the architecture would require the ON state of a synaptic connection to be unusually weak (highly resistive). Memory switching using a-Si:H along with ballast resistors patterned from amorphous Ge-metal alloys is investigated for a binary programmable read only memory matrix. The fabrication of a 1600 synapse test array of uniform connection strengths and a-Si:H switching elements is discussed.

  5. Recognition techniques for extracting information from semistructured documents

    NASA Astrophysics Data System (ADS)

    Della Ventura, Anna; Gagliardi, Isabella; Zonta, Bruna

    2000-12-01

    Archives of optical documents are increasingly widely employed, with demand driven in part by new norms granting legal value to digital documents, provided they are stored on physically unalterable media. On the supply side there is now a vast and technologically advanced market, where optical memories have solved the problem of the duration and permanence of data at costs comparable to those of magnetic memories. The remaining bottleneck in these systems is indexing. The indexing of documents with a variable structure, while still not completely automated, can be machine-supported to a large degree, with evident advantages both in the organization of the work and in extracting information, providing data that is much more detailed and potentially significant for the user. We present here a system for the automatic registration of correspondence to and from a public office. The system is based on a general methodology for the extraction, indexing, archiving, and retrieval of significant information from semi-structured documents. In our prototype application, this information is distributed among the database fields of sender, addressee, subject, date, and body of the document.

  6. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  7. Implementing Access to Data Distributed on Many Processors

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A reference architecture is defined for an object-oriented implementation of domains, arrays, and distributions written in the programming language Chapel. This technology primarily addresses domains that contain arrays with regular index sets; the low-level implementation details are beyond the scope of this discussion. What is defined is a complete set of object-oriented operators that allows one to perform data distributions for domain arrays involving regular arithmetic index sets. What is unique is that these operators allow arbitrary regions of the arrays to be fragmented and distributed across multiple processors with a single point of access, giving the programmer the illusion that all the elements are collocated on a single processor. Today's massively parallel High Productivity Computing Systems (HPCS) are characterized by a modular structure, with a large number of processing and memory units connected by a high-speed network. Locality of access as well as load balancing are primary concerns in these systems that are typically used for high-performance scientific computation. Data distributions address these issues by providing a range of methods for spreading large data sets across the components of a system. Over the past two decades, many languages, systems, tools, and libraries have been developed for the support of distributions. Since the performance of data parallel applications is directly influenced by the distribution strategy, users often resort to low-level programming models that allow fine-tuning of the distribution aspects affecting performance, but, at the same time, are tedious and error-prone. This technology presents a reusable design of a data-distribution framework for data parallel high-performance applications. Distributions are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy is flexible, so its behavior can change depending on the needs of the application. At the same time, high productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization.
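
    A toy example of the kind of mapping such a distribution framework must provide is a 1-D block distribution that translates a global index into an owning place and a local index. The class below is a hypothetical Python sketch for illustration, not Chapel's actual domain/distribution API.

        from dataclasses import dataclass

        @dataclass
        class BlockDist:
            """1-D block distribution of a global index range [0, n) over p places."""
            n: int    # global number of elements
            p: int    # number of places (locales / processors)

            def block(self) -> int:
                return -(-self.n // self.p)              # ceil(n / p) elements per place

            def owner(self, i: int) -> int:
                """Place that owns global index i."""
                return i // self.block()

            def to_local(self, i: int) -> tuple[int, int]:
                """Map a global index to (place, local index on that place)."""
                return i // self.block(), i % self.block()

        # 10 elements over 4 places -> blocks of 3, 3, 3 and 1 elements.
        d = BlockDist(n=10, p=4)
        print([d.to_local(i) for i in range(10)])        # [(0, 0), (0, 1), (0, 2), (1, 0), ..., (3, 0)]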

  8. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II. Towards Massively Parallel Computations using Smooth Particle Mesh Ewald.

    PubMed

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2014-02-28

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performances and overall very competitive timings in an energy-force computation needed to perform an MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package which is the first implementation for a polarizable model making large scale experiments for massively parallel PBC point dipole models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME and a noticeable improvement of the memory management giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data on different cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
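
    One of the two solvers mentioned, the preconditioned conjugate gradient, can be sketched generically: the polarization equations form a symmetric positive-definite linear system, so a PCG loop with a diagonal (Jacobi) preconditioner conveys the structure. The sketch below uses a small dense stand-in matrix rather than the SPME-built dipole interaction operator of the paper.

        import numpy as np

        def pcg(A, b, tol=1e-8, max_iter=200):
            """Preconditioned conjugate gradient for an SPD system A x = b,
            using a diagonal (Jacobi) preconditioner."""
            M_inv = 1.0 / np.diag(A)                     # Jacobi preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Small SPD stand-in for the dipole-dipole interaction matrix.
        rng = np.random.default_rng(0)
        B = rng.standard_normal((6, 6))
        A = B @ B.T + 6.0 * np.eye(6)
        b = rng.standard_normal(6)
        print(np.allclose(A @ pcg(A, b), b, atol=1e-6))  # True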

  9. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II. Towards Massively Parallel Computations using Smooth Particle Mesh Ewald

    PubMed Central

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2015-01-01

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performances and overall very competitive timings in an energy-force computation needed to perform an MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package which is the first implementation for a polarizable model making large scale experiments for massively parallel PBC point dipole models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME and a noticeable improvement of the memory management giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data on different cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations. PMID:26512230

  10. A distributed query execution engine of big attributed graphs.

    PubMed

    Batarfi, Omar; Elshawi, Radwa; Fayoumi, Ayman; Barnawi, Ahmed; Sakr, Sherif

    2016-01-01

    A graph is a popular data model that has become pervasively used for modeling structural relationships between objects. In practice, in many real-world graphs, the graph vertices and edges need to be associated with descriptive attributes. Such graphs are referred to as attributed graphs. G-SPARQL has been proposed as an expressive language, with a centralized execution engine, for querying attributed graphs. G-SPARQL supports various types of graph querying operations including reachability, pattern matching and shortest path where any G-SPARQL query may include value-based predicates on the descriptive information (attributes) of the graph edges/vertices in addition to the structural predicates. In general, a main limitation of centralized systems is that their vertical scalability is always restricted by the physical limits of computer systems. This article describes the design, implementation, and performance evaluation of DG-SPARQL, a distributed, hybrid, and adaptive parallel execution engine for G-SPARQL queries. In this engine, the topology of the graph is distributed over the main memory of the underlying nodes while the graph data are maintained in a relational store which is replicated on the disk of each of the underlying nodes. DG-SPARQL evaluates parts of the query plan via SQL queries which are pushed to the underlying relational stores while other parts of the query plan, as necessary, are evaluated via indexless memory-based graph traversal algorithms. Our experimental evaluation shows the efficiency and the scalability of DG-SPARQL on querying massive attributed graph datasets in addition to its ability to outperform Apache Giraph, a popular distributed graph processing system, by orders of magnitude.

  11. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  12. Iconographic dental typography. A dental character font for computer graphics.

    PubMed

    McCormack, J

    1991-06-08

    The recent massive increase in available memory for microcomputers now allows multiple font faces to be stored in RAM for instant access to the screen and for printed output. Fonts can be constructed in which the characters are not just letters or numbers, but are miniature graphic icons--in this instance pictures of teeth. When printed on an appropriate laser printer, this produces printed graphics of publishing quality.

  13. Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Mori, Tatsuya; Kawahara, Ryoichi; Hirokawa, Yutaka; Kobayashi, Atsushi; Yamamoto, Kimihiro; Sakamoto, Hitoaki; Asano, Shoichiro

    We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts via either malicious activities such as worm scans, spam distribution, or botnet control or normal activities such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To precisely determine the cardinality of a host we need tables of previously seen items for each host (e.g., flow tables for every host), and this may be infeasible for a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small amount of information, called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and estimate the number of these flows. We found that while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using a limited-sized memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable for large traffic data for a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
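
    The "cardinality summary" idea, estimating the number of distinct items without per-host flow tables, can be imitated with a probabilistic counter. The sketch below is a simple HyperLogLog-style register array; it is not the authors' exact summary or partitioned-address scheme, but it shows the memory/accuracy trade being exploited.

        import hashlib
        import math

        class CardinalitySketch:
            """Tiny HyperLogLog-style distinct counter (illustrative only)."""

            def __init__(self, b=10):
                self.b = b                               # 2**b registers
                self.m = 1 << b
                self.reg = [0] * self.m

            def add(self, item: str) -> None:
                h = int(hashlib.sha1(item.encode()).hexdigest(), 16)
                idx = h & (self.m - 1)                   # which register
                w = (h >> self.b) & ((1 << 64) - 1)      # 64-bit window of hash bits
                rank = 65 if w == 0 else 64 - w.bit_length() + 1
                self.reg[idx] = max(self.reg[idx], rank)

            def estimate(self) -> float:
                alpha = 0.7213 / (1 + 1.079 / self.m)
                est = alpha * self.m ** 2 / sum(2.0 ** -r for r in self.reg)
                zeros = self.reg.count(0)
                if est <= 2.5 * self.m and zeros:        # small-range correction
                    est = self.m * math.log(self.m / zeros)
                return est

        sketch = CardinalitySketch()
        for i in range(50_000):
            sketch.add(f"flow-{i}")
        print(round(sketch.estimate()))                  # within a few percent of 50000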

  14. Conceptual Distinctiveness Supports Detailed Visual Long-Term Memory for Real-World Objects

    PubMed Central

    Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude

    2012-01-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers’ capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness. PMID:20677899

  15. [Knowing without remembering: the contribution of developmental amnesia].

    PubMed

    Lebrun-Givois, C; Guillery-Girard, B; Thomas-Anterion, C; Laurent, B

    2008-05-01

    The organization of episodic and semantic memory is currently debated, and especially the role of the hippocampus in the functioning of these two systems. Since the theories derived from the observation of the famous patient HM, which highlighted the involvement of this structure in both systems, numerous studies have questioned the implication of the hippocampus in the learning of new semantic knowledge. Among these studies are Vargha-Khadem's cases of developmental amnesia. In spite of their clear hippocampal atrophy and a massive impairment of episodic memory, these children were able to acquire new semantic knowledge de novo. In the present paper, we describe a new case of developmental amnesia characteristic of this syndrome. In conclusion, the published data taken together question the implication of the hippocampus in all semantic learning and suggest the existence of a neocortical network, slower and requiring more exposures to semantic stimuli than the hippocampal one, that can compensate for a massive hippocampal impairment.

  16. Tinker-HP: a massively parallel molecular dynamics package for multiscale simulations of large complex systems with advanced point dipole polarizable force fields.

    PubMed

    Lagardère, Louis; Jolly, Luc-Henri; Lipparini, Filippo; Aviat, Félix; Stamm, Benjamin; Jing, Zhifeng F; Harger, Matthew; Torabifard, Hedieh; Cisneros, G Andrés; Schnieders, Michael J; Gresh, Nohad; Maday, Yvon; Ren, Pengyu Y; Ponder, Jay W; Piquemal, Jean-Philip

    2018-01-28

    We present Tinker-HP, a massively MPI parallel package dedicated to classical molecular dynamics (MD) and to multiscale simulations, using advanced polarizable force fields (PFF) encompassing distributed multipoles electrostatics. Tinker-HP is an evolution of the popular Tinker package code that conserves its simplicity of use and its reference double precision implementation for CPUs. Grounded on interdisciplinary efforts with applied mathematics, Tinker-HP allows for long polarizable MD simulations on large systems up to millions of atoms. We detail in the paper the newly developed extension of massively parallel 3D spatial decomposition to point dipole polarizable models as well as their coupling to efficient Krylov iterative and non-iterative polarization solvers. The design of the code allows the use of various computer systems ranging from laboratory workstations to modern petascale supercomputers with thousands of cores. Tinker-HP therefore provides the first high-performance, scalable CPU computing environment for the development of next generation point dipole PFFs and for production simulations. Strategies linking Tinker-HP to Quantum Mechanics (QM) in the framework of multiscale polarizable self-consistent QM/MD simulations are also provided. The capabilities, performance and scalability of the software are demonstrated via benchmark calculations using the polarizable AMOEBA force field on systems ranging from large water boxes of increasing size and ionic liquids to (very) large biosystems encompassing several proteins as well as the complete satellite tobacco mosaic virus and ribosome structures. For small systems, Tinker-HP appears to be competitive with the Tinker-OpenMM GPU implementation of Tinker. As the system size grows, Tinker-HP remains operational thanks to its access to distributed memory and takes advantage of new algorithms enabling stable, long-timescale polarizable simulations. Overall, a several thousand-fold acceleration over a single-core computation is observed for the largest systems. The extension of the present CPU implementation of Tinker-HP to other computational platforms is discussed.

  17. Shift and rotation invariant photorefractive crystal-based associative memory

    NASA Astrophysics Data System (ADS)

    Uang, Chii-Maw; Lin, Wei-Feng; Lu, Ming-Huei; Lu, Guowen; Lu, Mingzhe

    1995-08-01

    A shift and rotation invariant photorefractive (PR) crystal based associative memory is addressed. The proposed associative memory has three layers: the feature extraction, inner-product, and output mapping layers. The feature extraction is performed by expanding an input object into a set of circular harmonic expansions (CHE) in the Fourier domain to acquire both the shift and rotation invariant properties. The inner-product operation is performed by taking advantage of Bragg diffraction in the bulk PR-crystal. The output mapping is achieved by using the massive storage capacity of the PR-crystal. In the training process, memories are stored in another PR-crystal by using the wavelength multiplexing technique. During the recall process, the output from the winner-take-all processor decides which wavelength should be used to read out the memory from the PR-crystal.

  18. Routing performance analysis and optimization within a massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  19. Scalable Parallel Density-based Clustering and Applications

    NASA Astrophysics Data System (ADS)

    Patwary, Mostofa Ali

    2014-04-01

    Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise data. These algorithms have several applications, which require high performance computing, including finding halos and subhalos (clusters) from massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelization of these algorithms is extremely challenging, as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the data access sequentiality and to achieve high parallelism, we develop new parallel algorithms, both for DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups up to 27.5 on 40 cores on shared memory architecture and speedups up to 5,765 using 8,192 cores on distributed memory architecture. In our experiments, we found that while achieving this scalability, our algorithms produce clustering results with comparable quality to the classical algorithms.
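
    The reformulation being exploited, that DBSCAN clusters are connected components over core points, can be shown in a small serial sketch with a union-find structure. This omits the parallel partitioning and merging that constitute the paper's actual contribution, and the eps/min_pts values in the example are arbitrary.

        import numpy as np
        from scipy.spatial import cKDTree

        def dbscan_union_find(points, eps, min_pts):
            """DBSCAN expressed as connected components over core points
            (serial sketch). Returns one label per point; -1 marks noise."""
            n = len(points)
            neighbors = cKDTree(points).query_ball_point(points, eps)
            core = np.array([len(nb) >= min_pts for nb in neighbors])

            parent = list(range(n))
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]        # path halving
                    i = parent[i]
                return i
            def union(i, j):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

            # Connected components: union core points with their core neighbors.
            for i in range(n):
                if core[i]:
                    for j in neighbors[i]:
                        if core[j]:
                            union(i, j)

            # Border points attach to any core neighbor; the rest is noise.
            labels, roots = -np.ones(n, dtype=int), {}
            for i in range(n):
                anchor = i if core[i] else next((j for j in neighbors[i] if core[j]), None)
                if anchor is not None:
                    labels[i] = roots.setdefault(find(anchor), len(roots))
            return labels

        rng = np.random.default_rng(0)
        pts = np.vstack([rng.normal(c, 0.2, size=(50, 2)) for c in (0.0, 5.0)])
        print(set(dbscan_union_find(pts, eps=0.5, min_pts=5)))   # two cluster labels (noise would be -1)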

  20. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
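
    A minimal serial sketch of simulated annealing on a random 3-SAT instance conveys the propose/accept structure that Generalized Speculative Computation parallelizes; the speculative synchronous machinery itself is not reproduced here, and the cooling schedule and step count are arbitrary.

        import math
        import random

        def random_ksat(n_vars, n_clauses, k=3, seed=1):
            """Random k-SAT instance: each clause has k distinct, randomly signed variables."""
            rng = random.Random(seed)
            return [[rng.choice([1, -1]) * v for v in rng.sample(range(1, n_vars + 1), k)]
                    for _ in range(n_clauses)]

        def unsatisfied(clauses, assign):
            return sum(not any((lit > 0) == assign[abs(lit)] for lit in clause)
                       for clause in clauses)

        def anneal_sat(n_vars, clauses, t0=2.0, cooling=0.999, steps=20000, seed=2):
            rng = random.Random(seed)
            assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
            cost, t = unsatisfied(clauses, assign), t0
            for _ in range(steps):
                v = rng.randint(1, n_vars)               # propose: flip one variable
                assign[v] = not assign[v]
                new_cost = unsatisfied(clauses, assign)
                if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
                    cost = new_cost                      # accept the move
                else:
                    assign[v] = not assign[v]            # reject: undo the flip
                t *= cooling                             # cool the temperature
            return cost

        clauses = random_ksat(100, 425)                  # same clause/variable ratio as the smallest instance above
        print(anneal_sat(100, clauses), "clauses left unsatisfied")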

  1. AFL-1: A programming Language for Massively Concurrent Computers.

    DTIC Science & Technology

    1986-11-01

    Bibliography Ackley, D.H., Hinton, G.E., Sejnowski, T.J., "A Learning Algorithm for Boltzmann Machines", Cognitive Science, 1985, 9, 147-169. Agre...P.E., "Routines", Memo 828, MIT AI Laboratory, May 1985. Ballard, D.H., Hayes, P.J., "Parallel Logical Inference", Conference of the Cognitive Science..."Experiments on Semantic Memory and Language Comprehension", in L.W. Greg (Ed.), Cognition in Learning and Memory, New York, Wiley, 1972. Collins

  2. A general purpose subroutine for fast fourier transform on a distributed memory parallel machine

    NASA Technical Reports Server (NTRS)

    Dubey, A.; Zubair, M.; Grosch, C. E.

    1992-01-01

    One issue which is central in developing a general purpose Fast Fourier Transform (FFT) subroutine on a distributed memory parallel machine is the data distribution. It is possible that different users would like to use the FFT routine with different data distributions. Thus, there is a need to design FFT schemes on distributed memory parallel machines which can support a variety of data distributions. An FFT implementation on a distributed memory parallel machine which works for a number of data distributions commonly encountered in scientific applications is presented. The problem of rearranging the data after computing the FFT is also addressed. The performance of the implementation on a distributed memory parallel machine Intel iPSC/860 is evaluated.
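
    The data-distribution issue can be illustrated without MPI: a 2-D FFT over row-distributed data needs only local 1-D FFTs plus one global transpose (redistribution), and the choice of distribution determines where that transpose happens. The sketch below mimics the pattern with per-"node" blocks held in a Python list; it is a hypothetical block layout, not the iPSC/860 implementation described above.

        import numpy as np

        def distributed_fft2(row_blocks):
            """2-D FFT of a matrix distributed by rows across 'nodes' (list entries):
            local row FFTs, a global transpose, then local FFTs along the other axis."""
            # Step 1: each node transforms its local rows.
            row_blocks = [np.fft.fft(b, axis=1) for b in row_blocks]
            # Step 2: redistribution (the all-to-all transpose an MPI code would perform).
            transposed = np.vstack(row_blocks).T
            col_blocks = np.array_split(transposed, len(row_blocks), axis=0)
            # Step 3: each node transforms its new local rows (the original columns).
            col_blocks = [np.fft.fft(b, axis=1) for b in col_blocks]
            return np.vstack(col_blocks).T               # gathered here only to verify

        a = np.random.default_rng(0).standard_normal((8, 8))
        blocks = np.array_split(a, 4, axis=0)            # 4 "nodes", 2 rows each
        print(np.allclose(distributed_fft2(blocks), np.fft.fft2(a)))   # True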

  3. The application of a sparse, distributed memory to the detection, identification and manipulation of physical objects

    NASA Technical Reports Server (NTRS)

    Kanerva, P.

    1986-01-01

    To determine the relation of the sparse, distributed memory to other architectures, a broad review of the literature was made. The memory is called a pattern memory because such memories work with large patterns of features (high-dimensional vectors). A pattern is stored in a pattern memory by distributing it over a large number of storage elements and by superimposing it over other stored patterns. A pattern is retrieved by mathematical or statistical reconstruction from the distributed elements. Three pattern memories are discussed.

  4. The BlueGene/L supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanota, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2003-05-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network and a Gigabit Ethernet for I/O. 65,536 such nodes are connected into a 3-d torus with a geometry of 32×32×64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.

  5. The RMS survey: galactic distribution of massive star formation

    NASA Astrophysics Data System (ADS)

    Urquhart, J. S.; Figura, C. C.; Moore, T. J. T.; Hoare, M. G.; Lumsden, S. L.; Mottram, J. C.; Thompson, M. A.; Oudmaijer, R. D.

    2014-01-01

    We have used the well-selected sample of ~1750 embedded, young, massive stars identified by the Red MSX Source (RMS) survey to investigate the Galactic distribution of recent massive star formation. We present molecular line observations for ~800 sources without existing radial velocities. We describe the various methods used to assign distances extracted from the literature and solve the distance ambiguities towards approximately 200 sources located within the solar circle using archival H I data. These distances are used to calculate bolometric luminosities and estimate the survey completeness (~2 × 10⁴ L⊙). In total, we calculate the distance and luminosity of ~1650 sources, one third of which are above the survey's completeness threshold. Examination of the sample's longitude, latitude, radial velocities and mid-infrared images has identified ~120 small groups of sources, many of which are associated with well-known star formation complexes, such as G305, G333, W31, W43, W49 and W51. We compare the positional distribution of the sample with the expected locations of the spiral arms, assuming a model of the Galaxy consisting of four gaseous arms. The distribution of young massive stars in the Milky Way is spatially correlated with the spiral arms, with strong peaks in the source position and luminosity distributions at the arms' Galactocentric radii. The overall source and luminosity surface densities are both well correlated with the surface density of the molecular gas, which suggests that the massive star formation rate per unit molecular mass is approximately constant across the Galaxy. A comparison of the distribution of molecular gas and the young massive stars to that in other nearby spiral galaxies shows similar radial dependences. We estimate the total luminosity of the embedded massive star population to be ~0.76 × 10⁸ L⊙, 30 per cent of which is associated with the 10 most active star-forming complexes. We measure the scaleheight as a function of the Galactocentric distance and find that it increases only modestly from ~20-30 pc between 4 and 8 kpc, but much more rapidly at larger distances.

  6. An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.

    2015-07-01

    Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Providing an effective way for cloud users to access and analyse these massive spatiotemporal data from web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end-user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed on an open-source distributed file system. In it, massive remote sensing data are stored as public data, while the intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in the IPython Notebook web page through the web browser to process data, and the scripts are submitted to the IPython kernel for execution. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines, and physical machines, we conclude that the cloud computing environment built with Docker makes the greatest use of the host system resources and can handle more concurrent spatiotemporal computing tasks. Docker provides resource isolation mechanisms for I/O, CPU, and memory, which offers security guarantees when processing remote sensing data in the IPython Notebook. Users can write complex data processing code directly on the web, so they can design their own data processing algorithms.

  7. Parallel computation and the Basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1992-12-16

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
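
    A minimal sketch of the master-and-slaves, domain-decomposition-with-message-passing pattern described above is given below. The original work used Basis and PVM; this illustration substitutes MPI via mpi4py, and the 1-D domain and the per-subdomain work are purely hypothetical.

    ```python
    # Minimal sketch of a master-and-slaves domain decomposition with message
    # passing, in the spirit described above. The original work used Basis and PVM;
    # this illustration uses MPI via mpi4py instead. Run with at least two ranks,
    # e.g. mpiexec -n 4 python this_script.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    N = 1000  # global number of grid points (hypothetical)

    if rank == 0:
        # Master: partition the 1-D domain and hand one subdomain to each slave.
        chunks = np.array_split(np.arange(N, dtype=np.float64), size - 1)
        for slave, chunk in enumerate(chunks, start=1):
            comm.send(chunk, dest=slave, tag=1)
        # Gather the partial results held in the slaves' memories.
        total = sum(comm.recv(source=slave, tag=2) for slave in range(1, size))
        print("sum over all subdomains:", total)
    else:
        # Slave: receive its subdomain, do some local work, and report back.
        local = comm.recv(source=0, tag=1)
        comm.send(float(np.sum(local)), dest=0, tag=2)
    ```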

  8. Parallel computation and the basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1993-05-01

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  9. Particle merging algorithm for PIC codes

    NASA Astrophysics Data System (ADS)

    Vranic, M.; Grismayer, T.; Martins, J. L.; Fonseca, R. A.; Silva, L. O.

    2015-06-01

    Particle-in-cell merging algorithms aim to resample dynamically the six-dimensional phase space occupied by particles without distorting substantially the physical description of the system. Although various approaches have been proposed in previous works, none of them is able to fully conserve charge, momentum and energy together with their associated distributions. We describe here an alternative algorithm based on the coalescence of N massive or massless particles, considered to be close enough in phase space, into two new macro-particles. The local conservation of charge, momentum and energy is ensured by the resolution of a system of scalar equations. Various simulation comparisons have been carried out with and without the merging algorithm, from classical plasma physics problems to extreme scenarios where quantum electrodynamics is taken into account, showing, in addition to the conservation of local quantities, good reproducibility of the particle distributions. In cases where the number of particles would otherwise increase exponentially in the simulation box, dynamical merging permits a considerable speedup and significant memory savings, without which the simulations would be impossible to perform.
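
    The sketch below illustrates the coalescence step described above for the simplest case: N macro-particles close in phase space are replaced by two equal-weight macro-particles chosen so that total weight (charge), momentum and energy are conserved exactly. It assumes units with c = 1, a common particle mass, and an arbitrary choice of the splitting plane; it is a sketch of the idea, not the authors' implementation.

    ```python
    # Minimal sketch of the coalescence step: N macro-particles are replaced by two
    # new macro-particles of equal weight so that total weight (charge), momentum
    # and energy are conserved exactly. Units with c = 1 and a common mass m are
    # assumed; the plane in which the two new momenta are split is chosen
    # arbitrarily here.
    import numpy as np

    def merge_to_two(weights, momenta, m=1.0):
        """weights: (N,), momenta: (N, 3). Returns (w_new, p_a, p_b)."""
        w_t = np.sum(weights)
        p_t = np.sum(weights[:, None] * momenta, axis=0)                      # total momentum
        e_t = np.sum(weights * np.sqrt(m**2 + np.sum(momenta**2, axis=1)))    # total energy

        e_new = e_t / w_t                    # energy per unit weight of each new particle
        p_mag = np.sqrt(e_new**2 - m**2)     # |p| of each new particle
        p_t_mag = np.linalg.norm(p_t)
        e1 = p_t / p_t_mag                   # direction of the total momentum
        # Energy is a convex function of momentum, so cos_theta <= 1 always holds.
        cos_t = (p_t_mag / w_t) / p_mag
        sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))

        # Any unit vector perpendicular to e1 works; pick one deterministically.
        trial = np.array([1.0, 0.0, 0.0]) if abs(e1[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        e2 = np.cross(e1, trial)
        e2 /= np.linalg.norm(e2)

        p_a = p_mag * (cos_t * e1 + sin_t * e2)
        p_b = p_mag * (cos_t * e1 - sin_t * e2)
        return w_t / 2.0, p_a, p_b

    # Quick conservation check on random input: total momentum is preserved.
    w = np.random.rand(8) + 0.5
    p = np.random.randn(8, 3)
    w_new, p_a, p_b = merge_to_two(w, p)
    assert np.allclose(w_new * (p_a + p_b), np.sum(w[:, None] * p, axis=0))
    ```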

  10. Formation of large-scale structure from cosmic strings and massive neutrinos

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.; Melott, Adrian L.; Bertschinger, Edmund

    1989-01-01

    Numerical simulations of large-scale structure formation from cosmic strings and massive neutrinos are described. The linear power spectrum in this model resembles the cold-dark-matter power spectrum. Galaxy formation begins early, and the final distribution consists of isolated density peaks embedded in a smooth background, leading to a natural bias in the distribution of luminous matter. The distribution of clustered matter has a filamentary appearance with large voids.

  11. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initiate the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for an example problem using the portable parallel programming language, Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate that the MSO methodology can be applied effectively to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications to which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.

  12. Memory-induced acceleration and slowdown of barrier crossing

    NASA Astrophysics Data System (ADS)

    Kappler, Julian; Daldrop, Jan O.; Brünig, Florian N.; Boehle, Moritz D.; Netz, Roland R.

    2018-01-01

    We study the mean first-passage time τMFP for the barrier crossing of a single massive particle with non-Markovian memory by Langevin simulations in one dimension. In the Markovian limit of short memory time τΓ, the expected Kramers turnover between the overdamped (high-friction) and the inertial (low-friction) limits is recovered. Compared to the Markovian case, we find barrier crossing to be accelerated for intermediate memory time, while for long memory time, barrier crossing is slowed down and τMFP increases with τΓ as a power law, τMFP ∼ τΓ². Both effects are derived from an asymptotic propagator analysis: while barrier crossing acceleration at intermediate memory can be understood as an effective particle mass reduction, slowing down for long memory is caused by the slow kinetics of energy diffusion. A simple and globally accurate heuristic formula for τMFP in terms of all relevant time scales of the system is presented and used to establish a scaling diagram featuring the Markovian overdamped and the Markovian inertial regimes, as well as the non-Markovian intermediate memory time regime where barrier crossing is accelerated and the non-Markovian long memory time regime where barrier crossing is slowed down.
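
    A minimal sketch of a one-dimensional Langevin simulation with exponential memory is shown below, using the standard Markovian-embedding trick in which the memory friction and the coloured noise are carried by a single auxiliary variable. The double-well potential, parameters and Euler-Maruyama integrator are illustrative choices, not those used in the paper.

    ```python
    # Minimal sketch of a 1-D Langevin simulation with exponential memory, using a
    # standard Markovian embedding of the generalized Langevin equation: the memory
    # friction plus coloured noise are carried by one auxiliary variable z.
    # Potential, parameters and integrator (Euler-Maruyama) are illustrative only.
    import numpy as np

    def simulate_gle(m=1.0, gamma=1.0, tau=1.0, kT=1.0, dt=1e-3, steps=100_000, seed=0):
        rng = np.random.default_rng(seed)
        U_prime = lambda x: 4 * x**3 - 4 * x          # double-well U(x) = x^4 - 2 x^2
        x, v, z = -1.0, 0.0, 0.0                      # start in the left well
        noise_amp = np.sqrt(2 * kT * gamma) / tau
        traj = np.empty(steps)
        for i in range(steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            # z(t) = -int Gamma(t-s) v(s) ds + coloured noise, Gamma(t) = (gamma/tau) e^{-t/tau}
            z += (-z / tau - (gamma / tau) * v) * dt + noise_amp * dW
            v += (-U_prime(x) + z) / m * dt
            x += v * dt
            traj[i] = x
        return traj

    # Counting sign changes of x gives a crude handle on the barrier-crossing rate,
    # which can then be studied as a function of the memory time tau.
    traj = simulate_gle(tau=0.5)
    print("approximate number of barrier crossings:", np.sum(np.diff(np.sign(traj)) != 0))
    ```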

  13. On the spatial distributions of dense cores in Orion B

    NASA Astrophysics Data System (ADS)

    Parker, Richard J.

    2018-05-01

    We quantify the spatial distributions of dense cores in three spatially distinct areas of the Orion B star-forming region. For L1622, NGC 2068/NGC 2071, and NGC 2023/NGC 2024, we measure the amount of spatial substructure using the Q-parameter and find all three regions to be spatially substructured (Q < 0.8). We quantify the amount of mass segregation using ΛMSR and find that the most massive cores are mildly mass segregated in NGC 2068/NGC 2071 (ΛMSR ˜ 2) and very mass segregated in NGC 2023/NGC 2024 (ΛMSR = 28^{+13}_{-10} for the four most massive cores). Whereas the most massive cores in L1622 do not lie in areas of relatively high surface density or in deeper gravitational potentials, the massive cores in NGC 2068/NGC 2071 and NGC 2023/NGC 2024 significantly do. Given the low density (10 cores pc-2) and spatial substructure of cores in Orion B, the mass segregation cannot be dynamical. Our results are also inconsistent with simulations in which the most massive stars form via competitive accretion, and instead hint that magnetic fields may be important in influencing the primordial spatial distributions of gas and stars in star-forming regions.
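
    For readers unfamiliar with the Q-parameter, the sketch below shows one common way to compute it from projected core positions, as the ratio of the normalised mean minimum-spanning-tree edge length to the normalised mean pairwise separation, with Q < 0.8 indicating substructure. The normalisation conventions used here are assumptions in the spirit of the usual definition and should be checked against the original reference before quantitative use.

    ```python
    # Minimal sketch of a Q-parameter measurement for a set of projected core
    # positions: Q = (normalised mean MST edge length) / (normalised mean pairwise
    # separation). The normalisation conventions below follow common usage but
    # should be verified against the original definition before quantitative use.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    def q_parameter(xy):
        """xy: (N, 2) array of projected core positions."""
        n = len(xy)
        centre = xy.mean(axis=0)
        r_max = np.max(np.linalg.norm(xy - centre, axis=1))    # cluster radius
        area = np.pi * r_max**2

        sep = pdist(xy)                                        # all pairwise separations
        mst = minimum_spanning_tree(squareform(sep))
        mean_mst_edge = mst.sum() / (n - 1)

        m_bar = mean_mst_edge / (np.sqrt(area * n) / (n - 1))  # normalised MST edge length
        s_bar = sep.mean() / r_max                             # normalised mean separation
        return m_bar / s_bar

    # Q < 0.8 -> substructured; Q > 0.8 -> centrally concentrated (smooth).
    rng = np.random.default_rng(1)
    print(q_parameter(rng.uniform(-1, 1, size=(200, 2))))
    ```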

  14. Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.
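
    The line-preserving partitioning idea can be illustrated with a toy sketch: contract each implicit line into a single weighted unit before partitioning, so that no line is ever split across processors. A real solver would hand the contracted, weighted graph to a proper graph partitioner; the greedy heuristic below merely stands in for one.

    ```python
    # Toy illustration of line-preserving partitioning: each implicit line of mesh
    # points is kept together as one weighted unit, so no line straddles a
    # processor boundary. A real solver would use a proper weighted graph
    # partitioner; the greedy "largest line to lightest partition" rule is a stand-in.
    def partition_lines(lines, nparts):
        """lines: list of lists of mesh-point ids. Returns a list of nparts partitions."""
        parts = [[] for _ in range(nparts)]
        loads = [0] * nparts
        for line in sorted(lines, key=len, reverse=True):
            p = loads.index(min(loads))      # lightest partition so far
            parts[p].extend(line)            # the whole implicit line stays together
            loads[p] += len(line)
        return parts

    # Example: three processors, implicit lines of very different lengths.
    lines = [[0, 1, 2, 3, 4], [5, 6], [7, 8, 9], [10], [11, 12, 13, 14, 15, 16]]
    for i, part in enumerate(partition_lines(lines, 3)):
        print(f"processor {i}: {part}")
    ```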

  15. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. The same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.

  16. Adjusting process count on demand for petascale global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sosonkina, Masha; Watson, Layne T.; Radcliffe, Nicholas R.

    2012-11-23

    There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
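
    A minimal sketch of the memory-monitoring idea is given below: spawn an additional worker only when the projected per-worker footprint fits in the memory that is still available. It uses psutil and multiprocessing purely for illustration; the actual work modifies the MPI-based pVTdirect, and the footprint constant is hypothetical.

    ```python
    # Minimal sketch of the memory-monitoring idea: check available memory and
    # spawn an additional worker only when the projected need fits. psutil and
    # multiprocessing are used for illustration; the paper's code modifies the
    # MPI-based pVTdirect, and BYTES_PER_TASK is a hypothetical footprint.
    import multiprocessing as mp
    import psutil

    BYTES_PER_TASK = 512 * 1024**2   # hypothetical per-worker memory footprint

    def worker(task_id):
        print(f"worker {task_id} running")

    def maybe_spawn(workers, next_id):
        """Spawn one more worker only if enough memory is available for it."""
        if psutil.virtual_memory().available > 2 * BYTES_PER_TASK:  # keep a safety margin
            p = mp.Process(target=worker, args=(next_id,))
            p.start()
            workers.append(p)
            return True
        return False

    if __name__ == "__main__":
        workers, n = [], 0
        while n < 4 and maybe_spawn(workers, n):
            n += 1
        for p in workers:
            p.join()
        print(f"spawned {n} workers given the available memory")
    ```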

  17. The evolution of massive stars

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The hypotheses underlying theoretical studies of the evolution of massive model stars with and without mass loss are summarized. The evolutionary tracks followed by the models across theoretical Hertzsprung-Russell (HR) diagrams are compared with the observed distribution of B stars in an HR diagram. The pulsational properties of models of massive stars are also described.

  18. Immunological memory is associative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D.J.; Forrest, S.; Perelson, A.S.

    1996-12-31

    The purpose of this paper is to show that immunological memory is an associative and robust memory that belongs to the class of sparse distributed memories. This class of memories derives its associative and robust nature by sparsely sampling the input space and distributing the data among many independent agents. Other members of this class include a model of the cerebellar cortex and Sparse Distributed Memory (SDM). First we present a simplified account of the immune response and immunological memory. Next we present SDM, and then we show the correlations between immunological memory and SDM. Finally, we show how associative recall in the immune response can be both beneficial and detrimental to the fitness of an individual.

  19. A Methodology to Assess UrbanSim Scenarios

    DTIC Science & Technology

    2012-09-01

    Education; LOE – Line of Effort; MMOG – Massively Multiplayer Online Game; MC3 – Maneuver Captain’s Career Course; MSCCC – Maneuver Support... augmented reality simulations, increased automation and artificial intelligence simulation, and massively multiplayer online games (MMOG), among... Turn-based strategy games and simulations are vital tools for military

  20. Tinker-HP: a massively parallel molecular dynamics package for multiscale simulations of large complex systems with advanced point dipole polarizable force fields† †Electronic supplementary information (ESI) available. See DOI: 10.1039/c7sc04531j

    PubMed Central

    Lagardère, Louis; Jolly, Luc-Henri; Lipparini, Filippo; Aviat, Félix; Stamm, Benjamin; Jing, Zhifeng F.; Harger, Matthew; Torabifard, Hedieh; Cisneros, G. Andrés; Schnieders, Michael J.; Gresh, Nohad; Maday, Yvon; Ren, Pengyu Y.; Ponder, Jay W.

    2017-01-01

    We present Tinker-HP, a massively MPI-parallel package dedicated to classical molecular dynamics (MD) and to multiscale simulations, using advanced polarizable force fields (PFF) encompassing distributed multipole electrostatics. Tinker-HP is an evolution of the popular Tinker package that conserves its simplicity of use and its reference double-precision implementation for CPUs. Grounded in interdisciplinary efforts with applied mathematics, Tinker-HP allows for long polarizable MD simulations on large systems of up to millions of atoms. We detail in the paper the newly developed extension of massively parallel 3D spatial decomposition to point dipole polarizable models, as well as their coupling to efficient Krylov iterative and non-iterative polarization solvers. The design of the code allows the use of various computer systems ranging from laboratory workstations to modern petascale supercomputers with thousands of cores. Tinker-HP therefore offers the first high-performance, scalable CPU computing environment for the development of next-generation point dipole PFFs and for production simulations. Strategies linking Tinker-HP to quantum mechanics (QM) in the framework of multiscale polarizable self-consistent QM/MD simulations are also provided. The possibilities, performance and scalability of the software are demonstrated via benchmark calculations using the polarizable AMOEBA force field on systems ranging from large water boxes of increasing size and ionic liquids to (very) large biosystems encompassing several proteins as well as the complete satellite tobacco mosaic virus and ribosome structures. For small systems, Tinker-HP appears to be competitive with the Tinker-OpenMM GPU implementation of Tinker. As the system size grows, Tinker-HP remains operational thanks to its access to distributed memory and takes advantage of new algorithms enabling stable long-timescale polarizable simulations. Overall, a several thousand-fold acceleration over a single-core computation is observed for the largest systems. The extension of the present CPU implementation of Tinker-HP to other computational platforms is discussed. PMID:29732110
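
    The bookkeeping behind a 3-D spatial decomposition can be illustrated with the short sketch below, which assigns the atoms of a periodic box to an nx x ny x nz grid of domains (and hence to owning ranks). This is only an illustration of the idea, not Tinker-HP's implementation; the box size and grid are hypothetical.

    ```python
    # Minimal sketch of the bookkeeping behind a 3-D spatial decomposition: the
    # periodic box is cut into an nx x ny x nz grid of domains and each atom is
    # assigned to the domain (rank) that owns its coordinates. Illustration only,
    # not Tinker-HP's implementation; box size and grid are hypothetical.
    import numpy as np

    def assign_domains(positions, box, grid=(2, 2, 2)):
        """positions: (N, 3) coordinates; returns the owning domain index per atom."""
        grid = np.asarray(grid)
        frac = np.mod(positions, box) / box            # wrap into the periodic box
        cell = np.minimum((frac * grid).astype(int), grid - 1)
        # Flatten (ix, iy, iz) into a single rank id.
        return cell[:, 0] * grid[1] * grid[2] + cell[:, 1] * grid[2] + cell[:, 2]

    box = 40.0                                          # Angstrom, hypothetical water box
    pos = np.random.rand(1000, 3) * box
    ranks = assign_domains(pos, box)
    print("atoms per domain:", np.bincount(ranks, minlength=8))
    ```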

  1. ORCA Project: Research on high-performance parallel computer programming environments. Final report, 1 Apr-31 Mar 90

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, L.; Notkin, D.; Adams, L.

    1990-03-31

    This task relates to research on programming massively parallel computers. Previous work on the Ensemble concept of programming was extended, and an investigation into nonshared-memory models of parallel computation was undertaken. Previous work on the Ensemble concept defined a set of programming abstractions and was used to organize the programming task into three distinct levels: composition of machine instructions, composition of processes, and composition of phases. It was applied to shared-memory models of computation. During the present research period, these concepts were extended to nonshared-memory models. During the present research period, one Ph.D. thesis was completed, and one book chapter and six conference papers were published.

  2. [Physiopathology of autobiographical memory in aging: episodic and semantic distinction, clinical findings and neuroimaging studies].

    PubMed

    Piolino, Pascale; Martinelli, Pénélope; Viard, Armelle; Noulhiane, Marion; Eustache, Francis; Desgranges, Béatrice

    2010-01-01

    From an early age, autobiographical memory shapes our feeling of identity and continuity. It grows throughout life with our experiences and is built up from general self-knowledge and specific memories. The study of autobiographical memory reveals the dynamic and reconstructive features of this type of long-term memory, which combines semantic and episodic aspects, as well as its strength and fragility. In this article, we illustrate the properties of autobiographical memory from the fields of cognitive psychology, neuropsychology and neuroimaging research through an analysis of the mechanisms of its disturbance in normal aging and Alzheimer's disease. We show that the cognitive and neural bases of autobiographical memory are distinct in the two cases. In normal aging, autobiographical memory retrieval is mainly dependent on frontal/executive function and on the sense of re-experiencing a specific context, connected to hippocampal regions regardless of memory remoteness. In Alzheimer's disease, the autobiographical memory deficit, characterized by a Ribot's temporal gradient, is connected to different regions according to memory remoteness. Our functional neuroimaging results suggest that patients at the early stage can compensate for their massive deficit in recent episodic memories, correlated with hippocampal alteration, with overgeneral remote memories related to prefrontal regions. On the whole, these research findings have opened the way to new autobiographical memory studies comparing normal and pathological aging and to the development of cognitive methods of memory rehabilitation in patients based on preserved personal semantic capacity. © Société de Biologie, 2010.

  3. Autobiographical memory in adolescent girls with anorexia nervosa.

    PubMed

    Bomba, Monica; Marfone, Mirella; Brivio, Elisa; Oggiano, Silvia; Broggi, Fiorenza; Neri, Francesca; Nacinovich, Renata

    2014-11-01

    The aim of the study was to investigate deficits in autobiographical memory in adolescents with anorexia nervosa (AN). Sixty female individuals with AN and 60 healthy volunteers aged 11-18 years were enrolled. The Autobiographical Memory Test (AMT), the Eating Disorder Inventory-3, the Toronto Alexithymia Scale-20 (to evaluate alexithymia) and the Children's Depression Inventory (to evaluate depressive traits) were administered. In addition to the classical AMT words, we added seven experimental cues chosen from words often used by individuals with eating disorders in daily life. Girls with AN showed a massive overgeneral memory effect. This effect was not related to the presence of depression or alexithymia but increased with the duration of the disorder rather than with its severity. The alteration of autobiographical memory thus already manifests in adolescence. Girls with AN showed a dysregulation of both negative and positive emotional experiences that seemed to be influenced by the disease duration. Copyright © 2014 John Wiley & Sons, Ltd and Eating Disorders Association.

  4. Distributed representations in memory: Insights from functional brain imaging

    PubMed Central

    Rissman, Jesse; Wagner, Anthony D.

    2015-01-01

    Forging new memories for facts and events, holding critical details in mind on a moment-to-moment basis, and retrieving knowledge in the service of current goals all depend on a complex interplay between neural ensembles throughout the brain. Over the past decade, researchers have increasingly leveraged powerful analytical tools (e.g., multi-voxel pattern analysis) to decode the information represented within distributed fMRI activity patterns. In this review, we discuss how these methods can sensitively index neural representations of perceptual and semantic content, and how leverage on the engagement of distributed representations provides unique insights into distinct aspects of memory-guided behavior. We emphasize that, in addition to characterizing the contents of memories, analyses of distributed patterns shed light on the processes that influence how information is encoded, maintained, or retrieved, and thus inform memory theory. We conclude by highlighting open questions about memory that can be addressed through distributed pattern analyses. PMID:21943171

  5. Scarcity of autoreactive human blood IgA+ memory B cells

    PubMed Central

    Prigent, Julie; Lorin, Valérie; Kök, Ayrin; Hieu, Thierry; Bourgeau, Salomé

    2016-01-01

    Class‐switched memory B cells are key components of the “reactive” humoral immunity, which ensures a fast and massive secretion of high‐affinity antigen‐specific antibodies upon antigenic challenge. In humans, IgA class‐switched (IgA+) memory B cells and IgA antibodies are abundant in the blood. Although circulating IgA+ memory B cells and their corresponding secreted immunoglobulins likely possess major protective and/or regulatory immune roles, little is known about their specificity and function. Here, we show that IgA+ and IgG+ memory B‐cell antibodies cloned from the same healthy humans share common immunoglobulin gene features. IgA and IgG memory antibodies have comparable lack of reactivity to vaccines, common mucosa‐tropic viruses and commensal bacteria. However, the IgA+ memory B‐cell compartment contains fewer polyreactive clones and importantly, only rare self‐reactive clones compared to IgG+ memory B cells. Self‐reactivity of IgAs is acquired following B‐cell affinity maturation but not antibody class switching. Together, our data suggest the existence of different regulatory mechanisms for removing autoreactive clones from the IgG+ and IgA+ memory B‐cell repertoires, and/or different maturation pathways potentially reflecting the distinct nature and localization of the cognate antigens recognized by individual B‐cell populations. PMID:27469325

  6. A cost-effective methodology for the design of massively-parallel VLSI functional units

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Sriram, G.; Desouza, J.

    1993-01-01

    In this paper we propose a generalized methodology for the design of cost-effective massively-parallel VLSI Functional Units. This methodology is based on a technique of generating and reducing a massive bit-array on the mask-programmable PAcube VLSI array. This methodology unifies (maintains identical data flow and control) the execution of complex arithmetic functions on PAcube arrays. It is highly regular, expandable and uniform with respect to problem size and wordlength, thereby reducing the communication complexity. The memory-functional unit interface is regular and expandable. Using this technique, functional units of dedicated processors can be mask-programmed on the naked PAcube arrays, reducing the turn-around time. The production cost of such dedicated processors can be drastically reduced since the naked PAcube arrays can be mass-produced. Analysis of the performance of functional units designed by our method yields promising results.

  7. Especial Skills: Their Emergence with Massive Amounts of Practice

    ERIC Educational Resources Information Center

    Keetch, Katherine M.; Schmidt, Richard A.; Lee, Timothy D.; Young, Douglas E.

    2005-01-01

    Differing viewpoints concerning the specificity and generality of motor skill representations in memory were compared by contrasting versions of a skill having either extensive or minimal specific practice. In Experiments 1 and 2, skilled basketball players more accurately performed set shots at the foul line than would be predicted on the basis…

  8. Designing Next Generation Massively Multithreaded Architectures for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Secchi, Simone; Villa, Oreste

    Irregular applications, such as data mining or graph-based computations, show unpredictable memory/network access patterns and control structures. Massively multi-threaded architectures with large node count, like the Cray XMT, have been shown to address their requirements better than commodity clusters. In this paper we present the approaches that we are currently pursuing to design future generations of these architectures. First, we introduce the Cray XMT and compare it to other multithreaded architectures. We then propose an evolution of the architecture, integrating multiple cores per node and next generation network interconnect. We advocate the use of hardware support for remote memory reference aggregation to optimize network utilization. For this evaluation we developed a highly parallel, custom simulation infrastructure for multi-threaded systems. Our simulator executes unmodified XMT binaries with very large datasets, capturing effects due to contention and hot-spotting, while predicting execution times with greater than 90% accuracy. We also discuss the FPGA prototyping approach that we are employing to study efficient support for irregular applications in next generation manycore processors.

  9. Mnemonic transmission, social contagion, and emergence of collective memory: Influence of emotional valence, group structure, and information distribution.

    PubMed

    Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna

    2017-09-01

    Social transmission of memory and its consequence on collective memory have generated enduring interdisciplinary interest because of their widespread significance in interpersonal, sociocultural, and political arenas. We tested the influence of 3 key factors-emotional salience of information, group structure, and information distribution-on mnemonic transmission, social contagion, and collective memory. Participants individually studied emotionally salient (negative or positive) and nonemotional (neutral) picture-word pairs that were completely shared, partially shared, or unshared within participant triads, and then completed 3 consecutive recalls in 1 of 3 conditions: individual-individual-individual (control), collaborative-collaborative (identical group; insular structure)-individual, and collaborative-collaborative (reconfigured group; diverse structure)-individual. Collaboration enhanced negative memories especially in insular group structure and especially for shared information, and promoted collective forgetting of positive memories. Diverse group structure reduced this negativity effect. Unequally distributed information led to social contagion that creates false memories; diverse structure propagated a greater variety of false memories whereas insular structure promoted confidence in false recognition and false collective memory. A simultaneous assessment of network structure, information distribution, and emotional valence breaks new ground to specify how network structure shapes the spread of negative memories and false memories, and the emergence of collective memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. KSC-2012-1127

    NASA Image and Video Library

    2012-01-26

    CAPE CANAVERAL, Fla. -- A blue sky is reflected in the massive granite Space Mirror Memorial at the Kennedy Space Center Visitor Complex in Florida where a large wreath was placed during Kennedy Space Center’s NASA Day of Remembrance. The Day of Remembrance honors members of the NASA family who lost their lives while furthering the cause of exploration and discovery, including the astronaut crews of Apollo 1 and space shuttles Challenger and Columbia. Kennedy civil service and contractor employees, along with the general public, paid their respects throughout the day. The visitor complex provided flowers for visitors to place at the memorial. Photo credit: NASA/Kim Shiflett

  11. Memcomputing with membrane memcapacitive systems

    NASA Astrophysics Data System (ADS)

    Pershin, Y. V.; Traversa, F. L.; Di Ventra, M.

    2015-06-01

    We show theoretically that networks of membrane memcapacitive systems—capacitors with memory made out of membrane materials—can be used to perform a complete set of logic gates in a massively parallel way by simply changing the external input amplitudes, but not the topology of the network. This polymorphism is an important characteristic of memcomputing (computing with memories) that closely reproduces one of the main features of the brain. A practical realization of these membrane memcapacitive systems, using, e.g., graphene or other 2D materials, would be a step forward towards a solid-state realization of memcomputing with passive devices.

  12. Three Types of Memory in Emergency Medical Services Communication

    ERIC Educational Resources Information Center

    Angeli, Elizabeth L.

    2015-01-01

    This article examines memory and distributed cognition involved in the writing practices of emergency medical services (EMS) professionals. Results from a 16-month study indicate that EMS professionals rely on distributed cognition and three kinds of memory: individual, collaborative, and professional. Distributed cognition and the three types of…

  13. Cosmological Evolution of Massive Black Holes: Effects of Eddington Ratio Distribution and Quasar Lifetime

    NASA Astrophysics Data System (ADS)

    Cao, Xinwu

    2010-12-01

    A power-law time-dependent light curve for active galactic nuclei (AGNs) is expected in the self-regulated black hole growth scenario, in which the feedback of AGNs expels gas and shuts down accretion. This is also supported by the observed power-law Eddington ratio distribution of AGNs. At high redshifts, the AGN life timescale is comparable with (or even shorter than) the age of the universe, which sets a constraint on the minimal Eddington ratio for AGNs on the assumption of a power-law AGN light curve. The black hole mass function (BHMF) of AGN relics is calculated by integrating the continuity equation of massive black hole number density, on the assumption that the growth of massive black holes is dominated by mass accretion with a power-law Eddington ratio distribution for AGNs. The derived BHMF of AGN relics at z = 0 can fit the measured local mass function of the massive black holes in galaxies quite well, provided a radiative efficiency of ~0.1 and a suitable power-law index for the Eddington ratio distribution are adopted. In our calculations of the black hole evolution, the duty cycle of AGNs should be less than unity, which requires a quasar life timescale τQ >~ 5 × 10^8 years.

  14. Distributed-Memory Fast Maximal Independent Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing the necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All of these algorithms were designed for shared-memory machines and are analyzed using the PRAM model; they do not have direct, efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
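
    For reference, the single-process sketch below shows the random-priority round at the core of a Luby-style MIS algorithm (the "Luby(A)" flavour): every remaining vertex draws a random value and joins the independent set if it beats all of its remaining neighbours, after which winners and their neighbours are removed. The distributed-memory versions discussed in the paper parallelise this per-round logic across ranks; the sketch covers only the algorithmic core.

    ```python
    # Single-process sketch of the random-priority round used in Luby-style MIS
    # algorithms: each remaining vertex draws a random value and joins the set if
    # it beats all remaining neighbours; selected vertices and their neighbours are
    # removed. Only the algorithmic core is shown, not the distributed version.
    import random

    def luby_mis(adj, seed=0):
        """adj: dict mapping vertex -> set of neighbours. Returns a maximal independent set."""
        rng = random.Random(seed)
        alive = set(adj)
        mis = set()
        while alive:
            value = {v: rng.random() for v in alive}
            # A vertex wins the round if its value is smaller than all alive neighbours'.
            winners = {v for v in alive
                       if all(value[v] < value[u] for u in adj[v] if u in alive)}
            mis |= winners
            removed = winners | {u for v in winners for u in adj[v]}
            alive -= removed
        return mis

    # Example: a 6-cycle; any maximal independent set has two or three vertices.
    cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    print(sorted(luby_mis(cycle)))
    ```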

  15. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    PubMed

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N³) operations and O(N²) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.

  16. A single population of red globular clusters around the massive compact galaxy NGC 1277

    NASA Astrophysics Data System (ADS)

    Beasley, Michael A.; Trujillo, Ignacio; Leaman, Ryan; Montes, Mireia

    2018-03-01

    Massive galaxies are thought to form in two phases: an initial collapse of gas and giant burst of central star formation, followed by the later accretion of material that builds up their stellar and dark-matter haloes. The systems of globular clusters within such galaxies are believed to form in a similar manner. The initial central burst forms metal-rich (spectrally red) clusters, whereas more metal-poor (spectrally blue) clusters are brought in by the later accretion of less-massive satellites. This formation process is thought to result in the multimodal optical colour distributions that are seen in the globular cluster systems of massive galaxies. Here we report optical observations of the massive relic-galaxy candidate NGC 1277—a nearby, un-evolved example of a high-redshift ‘red nugget’ galaxy. We find that the optical colour distribution of the cluster system of NGC 1277 is unimodal and entirely red. This finding is in strong contrast to other galaxies of similar and larger stellar mass, the cluster systems of which always exhibit (and are generally dominated by) blue clusters. We argue that the colour distribution of the cluster system of NGC 1277 indicates that the galaxy has undergone little (if any) mass accretion after its initial collapse, and use simulations of possible merger histories to show that the stellar mass due to accretion is probably at most ten per cent of the total stellar mass of the galaxy. These results confirm that NGC 1277 is a genuine relic galaxy and demonstrate that blue clusters constitute an accreted population in present-day massive galaxies.

  17. A single population of red globular clusters around the massive compact galaxy NGC 1277.

    PubMed

    Beasley, Michael A; Trujillo, Ignacio; Leaman, Ryan; Montes, Mireia

    2018-03-22

    Massive galaxies are thought to form in two phases: an initial collapse of gas and giant burst of central star formation, followed by the later accretion of material that builds up their stellar and dark-matter haloes. The systems of globular clusters within such galaxies are believed to form in a similar manner. The initial central burst forms metal-rich (spectrally red) clusters, whereas more metal-poor (spectrally blue) clusters are brought in by the later accretion of less-massive satellites. This formation process is thought to result in the multimodal optical colour distributions that are seen in the globular cluster systems of massive galaxies. Here we report optical observations of the massive relic-galaxy candidate NGC 1277-a nearby, un-evolved example of a high-redshift 'red nugget' galaxy. We find that the optical colour distribution of the cluster system of NGC 1277 is unimodal and entirely red. This finding is in strong contrast to other galaxies of similar and larger stellar mass, the cluster systems of which always exhibit (and are generally dominated by) blue clusters. We argue that the colour distribution of the cluster system of NGC 1277 indicates that the galaxy has undergone little (if any) mass accretion after its initial collapse, and use simulations of possible merger histories to show that the stellar mass due to accretion is probably at most ten per cent of the total stellar mass of the galaxy. These results confirm that NGC 1277 is a genuine relic galaxy and demonstrate that blue clusters constitute an accreted population in present-day massive galaxies.

  18. Continuous Time Random Walks with memory and financial distributions

    NASA Astrophysics Data System (ADS)

    Montero, Miquel; Masoliver, Jaume

    2017-11-01

    We study financial distributions from the perspective of Continuous Time Random Walks with memory. We review some of our previous developments and apply them to financial problems. We also present some new models with memory that can be useful in characterizing tendency effects which are inherent in most markets. We also briefly study the effect on return distributions of fractional behaviors in the distribution of pausing times between successive transactions.

  19. Total recall in distributive associative memories

    NASA Technical Reports Server (NTRS)

    Danforth, Douglas G.

    1991-01-01

    Iterative error correction of asymptotically large associative memories is equivalent to a one-step learning rule. This rule is the inverse of the activation function of the memory. Spectral representations of nonlinear activation functions are used to obtain the inverse in closed form for Sparse Distributed Memory, Selected-Coordinate Design, and Radial Basis Functions.

  20. How can individual differences in autobiographical memory distributions of older adults be explained?

    PubMed

    Wolf, Tabea; Zimprich, Daniel

    2016-10-01

    The reminiscence bump phenomenon has frequently been reported for the recall of autobiographical memories. The present study complements previous research by examining individual differences in the distribution of word-cued autobiographical memories. More importantly, we introduce predictor variables that might account for individual differences in the mean (location) and the standard deviation (scale) of individual memory distributions. All variables were derived from different theoretical accounts of the reminiscence bump phenomenon. We used a mixed location-scale logitnormal model to analyse the 4602 autobiographical memories reported by 118 older participants. Results show reliable individual differences in the location and the scale. After controlling for age and gender, individual proportions of first-time experiences and individual proportions of positive memories, as well as ratings on Openness to new Experiences and Self-Concept Clarity, accounted for 29% of individual differences in the location and 42% of individual differences in the scale of autobiographical memory distributions. The results dovetail with a life-story account of the reminiscence bump, which integrates central components of previous accounts.

  1. The Generalized Quantum Episodic Memory Model.

    PubMed

    Trueblood, Jennifer S; Hemmer, Pernille

    2017-11-01

    Recent evidence suggests that experienced events are often mapped to too many episodic states, including those that are logically or experimentally incompatible with one another. For example, episodic over-distribution patterns show that the probability of accepting an item under different mutually exclusive conditions violates the disjunction rule. A related example, called subadditivity, occurs when the probability of accepting an item under mutually exclusive and exhaustive instruction conditions sums to a number >1. Both the over-distribution effect and subadditivity have been widely observed in item and source-memory paradigms. These phenomena are difficult to explain using standard memory frameworks, such as signal-detection theory. A dual-trace model called the over-distribution (OD) model (Brainerd & Reyna, 2008) can explain the episodic over-distribution effect, but not subadditivity. Our goal is to develop a model that can explain both effects. In this paper, we propose the Generalized Quantum Episodic Memory (GQEM) model, which extends the Quantum Episodic Memory (QEM) model developed by Brainerd, Wang, and Reyna (2013). We test GQEM by comparing it to the OD model using data from a novel item-memory experiment and a previously published source-memory experiment (Kellen, Singmann, & Klauer, 2014) examining the over-distribution effect. Using the best-fit parameters from the over-distribution experiments, we conclude by showing that the GQEM model can also account for subadditivity. Overall these results add to a growing body of evidence suggesting that quantum probability theory is a valuable tool in modeling recognition memory. Copyright © 2016 Cognitive Science Society, Inc.

  2. Differentiation and Response Bias in Episodic Memory: Evidence from Reaction Time Distributions

    ERIC Educational Resources Information Center

    Criss, Amy H.

    2010-01-01

    In differentiation models, the processes of encoding and retrieval produce an increase in the distribution of memory strength for targets and a decrease in the distribution of memory strength for foils as the amount of encoding increases. This produces an increase in the hit rate and decrease in the false-alarm rate for a strongly encoded compared…

  3. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    NASA Astrophysics Data System (ADS)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region, and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation therefore presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSOs) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and the Two Micron All Sky Survey. We present a study of over 900 MYSOs and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in the context of the observed structure of the Galaxy. The distribution of MYSOs and UCHII regions is found to be spatially correlated with the spiral arms and the Galactic bar. We examine the radial distribution of MYSOs and UCHII regions, find variations in the star formation rate between the inner and outer Galaxy, and discuss the implications for star formation throughout the Galactic disc.

  4. Acute Infection with Epstein-Barr Virus Targets and Overwhelms the Peripheral Memory B-Cell Compartment with Resting, Latently Infected Cells

    PubMed Central

    Hochberg, Donna; Souza, Tatyana; Catalina, Michelle; Sullivan, John L.; Luzuriaga, Katherine; Thorley-Lawson, David A.

    2004-01-01

    In this paper we demonstrate that during acute infection with Epstein-Barr virus (EBV), the peripheral blood fills up with latently infected, resting memory B cells to the point where up to 50% of all the memory cells may carry EBV. Despite this massive invasion of the memory compartment, the virus remains tightly restricted to memory cells, such that, in one donor, fewer than 1 in 104 infected cells were found in the naive compartment. We conclude that, even during acute infection, EBV persistence is tightly regulated. This result confirms the prediction that during the early phase of infection, before cellular immunity is effective, there is nothing to prevent amplification of the viral cycle of infection, differentiation, and reactivation, causing the peripheral memory compartment to fill up with latently infected cells. Subsequently, there is a rapid decline in infected cells for the first few weeks that approximates the decay in the cytotoxic-T-cell responses to viral replicative antigens. This phase is followed by a slower decline that, even by 1 year, had not reached a steady state. Therefore, EBV may approach but never reach a stable equilibrium. PMID:15113901

  5. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator

    PubMed Central

    Wang, Runchun M.; Thakur, Chetan S.; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks. PMID:29692702

  6. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator.

    PubMed

    Wang, Runchun M; Thakur, Chetan S; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.

  7. Disk-based k-mer counting on a PC

    PubMed Central

    2013-01-01

    Background: The k-mer counting problem, which is to build the histogram of occurrences of every k-symbol long substring in a given text, is important for many bioinformatics applications, including the development of de Bruijn graph genome assemblers, fast multiple sequence alignment and repeat detection. Results: We propose a simple, yet efficient, parallel disk-based algorithm for counting k-mers. Experiments show that it usually offers the fastest solution to the considered problem while demanding a relatively small amount of memory. In particular, it is capable of counting the statistics for short-read human genome data, in an input gzipped FASTQ file, in less than 40 minutes on a PC with 16 GB of RAM and 6 CPU cores, and for long-read human genome data in less than 70 minutes. On a more powerful machine, using 32 GB of RAM and 32 CPU cores, the tasks are accomplished in less than half the time. No other algorithm for most tested settings of this problem and mammalian-size data can accomplish this task in comparable time. Our solution is also memory-frugal; most competing algorithms cannot work efficiently on a PC with 16 GB of memory for such massive data. Conclusions: By making use of cheap disk space and exploiting CPU and I/O parallelism we propose a very competitive k-mer counting procedure, called KMC. Our results suggest that judicious resource management may allow at least some bioinformatics problems with massive data to be solved on a commodity personal computer. PMID:23679007
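
    A toy illustration of the two-phase, disk-based strategy is sketched below: k-mers are first streamed into temporary bin files selected by a hash, and each bin is then counted independently, so peak memory is bounded by the largest bin rather than by the whole data set. This is a sketch of the idea only, not the KMC implementation, which adds compact encodings, parallel I/O and much more.

    ```python
    # Toy illustration of two-phase, disk-based k-mer counting: phase 1 streams
    # reads and distributes k-mers into temporary bin files by a hash; phase 2
    # counts one bin at a time, so peak memory is bounded by the largest bin.
    # Sketch of the idea only, not the KMC implementation.
    import os
    from collections import Counter

    def count_kmers_disk(reads, k, n_bins=16, tmpdir="kmer_bins"):
        os.makedirs(tmpdir, exist_ok=True)
        bins = [open(os.path.join(tmpdir, f"bin_{i}.txt"), "w") for i in range(n_bins)]
        # Phase 1: stream reads, write each k-mer to the bin selected by its hash.
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                bins[hash(kmer) % n_bins].write(kmer + "\n")
        for f in bins:
            f.close()
        # Phase 2: count one bin at a time; identical k-mers always land in the same bin.
        counts = Counter()
        for i in range(n_bins):
            with open(os.path.join(tmpdir, f"bin_{i}.txt")) as f:
                counts.update(line.strip() for line in f)
        return counts

    reads = ["ACGTACGTGACG", "TTACGTACGTAA"]           # stand-in for a FASTQ stream
    print(count_kmers_disk(reads, k=5).most_common(3))
    ```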

  8. Massive binary stars as a probe of massive star formation

    NASA Astrophysics Data System (ADS)

    Kiminki, Daniel C.

    2010-10-01

    Massive stars are among the largest and most influential objects we know of on a sub-galactic scale. Binary systems composed of at least one of these stars may be responsible for several types of phenomena, including type Ib/c supernovae, short and long gamma-ray bursts, high-velocity runaway O and B-type stars, and the density of the parent star clusters. Our understanding of these stars has met with limited success, especially in the area of their formation. Current formation theories rely on the accumulated statistics of massive binary systems, which are limited because of their sample size or the inhomogeneous environments from which the statistics are collected. The purpose of this work is to provide a higher-level analysis of close massive binary characteristics using the radial velocity information of 113 massive stars (B3 and earlier) and binary orbital properties for the 19 known close massive binaries in the Cygnus OB2 Association. This work provides an analysis using the largest amount of massive star and binary information ever compiled for an O-star rich cluster like Cygnus OB2, and complements other O-star binary studies such as NGC 6231, NGC 2244, and NGC 6611. I first report the discovery of 73 new O or B-type stars and 13 new massive binaries by this survey. This work involved the use of 75 successful nights of spectroscopic observation at the Wyoming Infrared Observatory in addition to observations obtained using the Hydra multi-object spectrograph at WIYN, the HIRES echelle spectrograph at Keck, and the Hamilton spectrograph at Lick. I use these data to estimate the spectrophotometric distance to the cluster and to measure the mean systemic velocity and the one-sided velocity dispersion of the cluster. Finally, I compare these data to a series of Monte Carlo models, the results of which indicate that the binary fraction of the cluster is 57 +/- 5% and that the indices of the power-law distributions describing the log of the periods, the mass ratios, and the eccentricities are -0.2 +/- 0.3, 0.3 +/- 0.3, and -0.8 +/- 0.3, respectively (or not consistent with a simple power-law distribution). The observed distributions indicate a preference for short-period systems with nearly circular orbits and companions that are not likely drawn from a standard initial mass function, as would be expected from random pairing. An interesting and unexpected result is that the period distribution is inconsistent with a standard power-law slope, stemming mainly from an excess of periods between 3 and 5 days and an absence of periods between 7 and 14 days. One possible explanation of this phenomenon is that binary systems with periods of 7-14 days are migrating to periods of 3-5 days. In addition, the binary distribution found here is not consistent with previous suggestions in the literature that 45% of OB binaries are members of twin systems (mass ratio near 1).

  9. The MASSIVE Survey - X. Misalignment between Kinematic and Photometric Axes and Intrinsic Shapes of Massive Early-Type Galaxies

    NASA Astrophysics Data System (ADS)

    Ene, Irina; Ma, Chung-Pei; Veale, Melanie; Greene, Jenny E.; Thomas, Jens; Blakeslee, John P.; Foster, Caroline; Walsh, Jonelle L.; Ito, Jennifer; Goulding, Andy D.

    2018-06-01

    We use spatially resolved two-dimensional stellar velocity maps over a 107″ × 107″ field of view to investigate the kinematic features of 90 early-type galaxies above stellar mass 10^11.5 M⊙ in the MASSIVE survey. We measure the misalignment angle Ψ between the kinematic and photometric axes and identify local features such as velocity twists and kinematically distinct components. We find 46% of the sample to be well aligned (Ψ < 15°), 33% misaligned, and 21% without detectable rotation (non-rotators). Only 24% of the sample are fast rotators, the majority of which (91%) are aligned, whereas 57% of the slow rotators are misaligned with a nearly flat distribution of Ψ from 15° to 90°. 11 galaxies have Ψ ≳ 60° and thus exhibit minor-axis ("prolate") rotation in which the rotation is preferentially around the photometric major axis. Kinematic misalignments occur more frequently for lower galaxy spin or denser galaxy environments. Using the observed misalignment and ellipticity distributions, we infer the intrinsic shape distribution of our sample and find that MASSIVE slow rotators are consistent with being mildly triaxial, with mean axis ratios of b/a = 0.88 and c/a = 0.65. In terms of local kinematic features, 51% of the sample exhibit kinematic twists of larger than 20°, and 2 galaxies have kinematically distinct components. The frequency of misalignment and the broad distribution of Ψ reported here suggest that the most massive early-type galaxies are mildly triaxial, and that formation processes resulting in kinematically misaligned slow rotators such as gas-poor mergers occur frequently in this mass range.

  10. Compact modeling of CRS devices based on ECM cells for memory, logic and neuromorphic applications.

    PubMed

    Linn, E; Menzel, S; Ferch, S; Waser, R

    2013-09-27

    Dynamic physics-based models of resistive switching devices are of great interest for the realization of complex circuits required for memory, logic and neuromorphic applications. Here, we apply such a model of an electrochemical metallization (ECM) cell to complementary resistive switches (CRSs), which are favorable devices to realize ultra-dense passive crossbar arrays. Since a CRS consists of two resistive switching devices, it is straightforward to apply the dynamic ECM model for CRS simulation with MATLAB and SPICE, enabling study of the device behavior in terms of sweep rate and series resistance variations. Furthermore, typical memory access operations as well as basic implication logic operations can be analyzed, revealing requirements for proper spike and level read operations. This basic understanding facilitates applications of massively parallel computing paradigms required for neuromorphic applications.

  11. Traumatic memories, eye movements, phobia, and panic: a critical note on the proliferation of EMDR.

    PubMed

    Muris, P; Merckelbach, H

    1999-01-01

    In the past years, Eye Movement Desensitization and Reprocessing (EMDR) has become increasingly popular as a treatment method for Posttraumatic Stress Disorder (PTSD). The current article critically evaluates three recurring assumptions in EMDR literature: (a) the notion that traumatic memories are fixed and stable and that flashbacks are accurate reproductions of the traumatic incident; (b) the idea that eye movements, or other lateralized rhythmic behaviors have an inhibitory effect on emotional memories; and (c) the assumption that EMDR is not only effective in treating PTSD, but can also be successfully applied to other psychopathological conditions. There is little support for any of these three assumptions. Meanwhile, the expansion of the theoretical underpinnings of EMDR in the absence of a sound empirical basis casts doubts on the massive proliferation of this treatment method.

  12. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  13. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.
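
    For orientation, the dense kernel that such a sparse code repeatedly invokes is an ordinary Cholesky factorization. A minimal, unblocked Python version is sketched below; it is illustrative only and not the Connection Machine data-parallel implementation described above:

        import numpy as np

        def dense_cholesky(A):
            """Unblocked Cholesky factorization: returns lower-triangular L with A = L @ L.T.
            A is assumed symmetric positive definite."""
            A = np.array(A, dtype=float)
            n = A.shape[0]
            L = np.zeros_like(A)
            for j in range(n):
                L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])   # diagonal entry
                for i in range(j + 1, n):
                    L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
            return L

        A = np.array([[4.0, 2.0, 2.0],
                      [2.0, 5.0, 3.0],
                      [2.0, 3.0, 6.0]])
        L = dense_cholesky(A)
        print(np.allclose(L @ L.T, A))  # True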

  14. Complete tomography of a high-fidelity solid-state entangled spin-photon qubit pair.

    PubMed

    De Greve, Kristiaan; McMahon, Peter L; Yu, Leo; Pelc, Jason S; Jones, Cody; Natarajan, Chandra M; Kim, Na Young; Abe, Eisuke; Maier, Sebastian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Hadfield, Robert H; Forchel, Alfred; Fejer, M M; Yamamoto, Yoshihisa

    2013-01-01

    Entanglement between stationary quantum memories and photonic qubits is crucial for future quantum communication networks. Although high-fidelity spin-photon entanglement was demonstrated in well-isolated atomic and ionic systems, in the solid-state, where massively parallel, scalable networks are most realistically conceivable, entanglement fidelities are typically limited due to intrinsic environmental interactions. Distilling high-fidelity entangled pairs from lower-fidelity precursors can act as a remedy, but the required overhead scales unfavourably with the initial entanglement fidelity. With spin-photon entanglement as a crucial building block for entangling quantum network nodes, obtaining high-fidelity entangled pairs becomes imperative for practical realization of such networks. Here we report the first results of complete state tomography of a solid-state spin-photon-polarization-entangled qubit pair, using a single electron-charged indium arsenide quantum dot. We demonstrate record-high fidelity in the solid-state of well over 90%, and the first (99.9%-confidence) achievement of a fidelity that will unambiguously allow for entanglement distribution in solid-state quantum repeater networks.

  15. The design and implementation of a parallel unstructured Euler solver using software primitives

    NASA Technical Reports Server (NTRS)

    Das, R.; Mavriplis, D. J.; Saltz, J.; Gupta, S.; Ponnusamy, R.

    1992-01-01

    This paper is concerned with the implementation of a three-dimensional unstructured grid Euler-solver on massively parallel distributed-memory computer architectures. The goal is to minimize solution time by achieving high computational rates with a numerically efficient algorithm. An unstructured multigrid algorithm with an edge-based data structure has been adopted, and a number of optimizations have been devised and implemented in order to accelerate the parallel communication rates. The implementation is carried out by creating a set of software tools, which provide an interface between the parallelization issues and the sequential code, while providing a basis for future automatic run-time compilation support. Large practical unstructured grid problems are solved on the Intel iPSC/860 hypercube and Intel Touchstone Delta machine. The quantitative effects of the various optimizations are demonstrated, and we show that their combined effect leads to roughly a factor of three performance improvement. The overall solution efficiency is compared with that obtained on the CRAY-YMP vector supercomputer.

  16. Signal and noise extraction from analog memory elements for neuromorphic computing.

    PubMed

    Gong, N; Idé, T; Kim, S; Boybat, I; Sebastian, A; Narayanan, V; Ando, T

    2018-05-29

    Dense crossbar arrays of non-volatile memory (NVM) can potentially enable massively parallel and highly energy-efficient neuromorphic computing systems. The key requirements for the NVM elements are continuous (analog-like) conductance tuning capability and switching symmetry with acceptable noise levels. However, most NVM devices show non-linear and asymmetric switching behaviors. Such non-linear behaviors render separation of signal and noise extremely difficult with conventional characterization techniques. In this study, we establish a practical methodology based on Gaussian process regression to address this issue. The methodology is agnostic to switching mechanisms and applicable to various NVM devices. We show the tradeoff between switching symmetry and signal-to-noise ratio for HfO2-based resistive random access memory. Then, we characterize 1000 phase-change memory devices based on Ge2Sb2Te5 and separate the total variability into device-to-device variability and inherent randomness from individual devices. These results highlight the usefulness of our methodology for realizing ideal NVM devices for neuromorphic computing.
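
    The separation of a smooth switching response from noise with Gaussian process regression can be sketched as follows. This is an illustrative example on synthetic conductance data using scikit-learn; the kernel composition (RBF plus a white-noise term) and all parameter values are assumptions for the sketch, not the authors' measurement pipeline:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Synthetic analog-NVM conductance trace: smooth non-linear response + noise
        rng = np.random.default_rng(0)
        pulse = np.arange(100, dtype=float)
        true_signal = 1.0 - np.exp(-pulse / 30.0)            # saturating update curve
        observed = true_signal + rng.normal(0.0, 0.05, pulse.size)

        # The RBF kernel captures the smooth signal; WhiteKernel absorbs the noise variance
        kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(pulse.reshape(-1, 1), observed)

        signal_estimate = gpr.predict(pulse.reshape(-1, 1))
        noise_estimate = observed - signal_estimate
        print(gpr.kernel_)                  # fitted hyperparameters, incl. noise level
        print(np.std(noise_estimate))       # empirical noise amplitude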

  17. Shared versus distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The question of whether multiprocessors should have shared or distributed memory has attracted a great deal of attention. Some researchers argue strongly for building distributed memory machines, while others argue just as strongly for programming shared memory multiprocessors. A great deal of research is underway on both types of parallel systems. Special emphasis is placed on systems with a very large number of processors for computation-intensive tasks, and research and implementation trends are considered. It appears that the two types of systems will likely converge to a common form for large scale multiprocessors.

  18. Associative visual agnosia: a case study.

    PubMed

    Charnallet, A; Carbonnel, S; David, D; Moreaud, O

    2008-01-01

    We report a case of massive associative visual agnosia. In the light of current theories of identification and semantic knowledge organization, a deficit involving both levels of structural description system and visual semantics must be assumed to explain the case. We suggest, in line with a previous case study, an alternative account in the framework of (non abstractive) episodic models of memory.

  19. Thermal form factor approach to the ground-state correlation functions of the XXZ chain in the antiferromagnetic massive regime

    NASA Astrophysics Data System (ADS)

    Dugave, Maxime; Göhmann, Frank; Kozlowski, Karol K.; Suzuki, Junji

    2016-09-01

    We use the form factors of the quantum transfer matrix in the zero-temperature limit in order to study the two-point ground-state correlation functions of the XXZ chain in the antiferromagnetic massive regime. We obtain novel form factor series representations of the correlation functions which differ from those derived either from the q-vertex-operator approach or from the algebraic Bethe Ansatz approach to the usual transfer matrix. We advocate that our novel representations are numerically more efficient and allow for a straightforward calculation of the large-distance asymptotic behaviour of the two-point functions. Keeping control over the temperature corrections to the two-point functions we see that these are of order T^∞ in the whole antiferromagnetic massive regime. The isotropic limit of our result yields a novel form factor series representation for the two-point correlation functions of the XXX chain at zero magnetic field. Dedicated to the memory of Petr Petrovich Kulish.

  20. Learning from Massive Distributed Data Sets (Invited)

    NASA Astrophysics Data System (ADS)

    Kang, E. L.; Braverman, A. J.

    2013-12-01

    Technologies for remote sensing and ever-expanding computer experiments in climate science are generating massive data sets. Meanwhile, it has been common in all areas of large-scale science to have these 'big data' distributed over multiple different physical locations, and moving large amounts of data can be impractical. In this talk, we will discuss efficient ways to summarize and learn from distributed data. We formulate a graphical model to mimic the main characteristics of a distributed-data network, including the size of the data sets and the speed of moving data. With this nominal model, we investigate the tradeoff between prediction accuracy and the cost of data movement, theoretically and through simulation experiments. We will also discuss new implementations of spatial and spatio-temporal statistical methods optimized for distributed data.

  1. Statistical prediction with Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1989-01-01

    A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.
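
    For readers unfamiliar with the mechanics of SDM, a toy autoassociative write/read cycle over randomly chosen hard locations can be sketched in a few lines of Python. The dimension, number of hard locations, and activation radius below are arbitrary illustrative choices, not Kanerva's or the author's parameters:

        import numpy as np

        rng = np.random.default_rng(1)
        N_DIM, N_LOC, RADIUS = 256, 2000, 115   # address length, hard locations, Hamming radius

        hard_addresses = rng.integers(0, 2, size=(N_LOC, N_DIM))
        counters = np.zeros((N_LOC, N_DIM), dtype=int)

        def activated(address):
            """Hard locations within the Hamming radius of the cue address."""
            return np.sum(hard_addresses != address, axis=1) <= RADIUS

        def write(address, data):
            """Add the bipolar (+1/-1) form of `data` into every activated location."""
            counters[activated(address)] += 2 * data - 1

        def read(address):
            """Sum counters over activated locations and threshold back to bits."""
            sums = counters[activated(address)].sum(axis=0)
            return (sums > 0).astype(int)

        pattern = rng.integers(0, 2, size=N_DIM)
        write(pattern, pattern)                                  # autoassociative store
        noisy = pattern.copy()
        noisy[rng.choice(N_DIM, size=20, replace=False)] ^= 1    # corrupt 20 bits
        print(np.sum(read(noisy) != pattern))                    # expected to be 0: cue cleaned up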

  2. Supporting shared data structures on distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and IPSC/2 hypercubes is reported.

  3. Differential memory in the earth's magnetotail

    NASA Technical Reports Server (NTRS)

    Burkhart, G. R.; Chen, J.

    1991-01-01

    The process of 'differential memory' in the earth's magnetotail is studied in the framework of the modified Harris magnetotail geometry. It is verified that differential memory can generate non-Maxwellian features in the modified Harris field model. The time scales and the potentially observable distribution functions associated with the process of differential memory are investigated, and it is shown that non-Maxwellian distributions can evolve as a test particle response to distribution function boundary conditions in a Harris field magnetotail model. The non-Maxwellian features which arise from distribution function mapping have definite time scales associated with them, which are generally shorter than the earthward convection time scale but longer than the typical Alfven crossing time.

  4. Low Latency Messages on Distributed Memory Multiprocessors

    DOE PAGES

    Rosing, Matt; Saltz, Joel

    1995-01-01

    This article describes many of the issues in developing an efficient interface for communication on distributed memory machines. Although the hardware component of message latency is less than 1 μs on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 μs. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine grained communication can be put on these machines. This article describes several tests performed and many of the issues involved in supporting low latency messages on distributed memory machines.

  5. Limits of the memory coefficient in measuring correlated bursts

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Hiraoka, Takayuki

    2018-03-01

    Temporal inhomogeneities in event sequences of natural and social phenomena have been characterized in terms of interevent times and correlations between interevent times. The inhomogeneities of interevent times have been extensively studied, while the correlations between interevent times, often called correlated bursts, are far from being fully understood. For measuring correlated bursts, two relevant approaches have been suggested: the memory coefficient and the burst size distribution. Here a burst size denotes the number of events in a bursty train detected for a given time window. Empirical analyses have revealed that a larger memory coefficient tends to be associated with a heavier tail of the burst size distribution. In particular, empirical findings in human activities appear inconsistent, such that the memory coefficient is close to 0, while burst size distributions follow a power law. In order to comprehend these observations, by assuming conditional independence between consecutive interevent times, we derive the analytical form of the memory coefficient as a function of parameters describing the interevent time and burst size distributions. Our analytical result can explain the general tendency for a larger memory coefficient to be associated with a heavier tail of the burst size distribution. We also find that the apparently inconsistent observations in human activities are compatible with each other, indicating that the memory coefficient has limits in measuring correlated bursts.
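
    Concretely, the memory coefficient is usually computed as the Pearson correlation between consecutive interevent times, and burst sizes as the lengths of event trains separated by gaps longer than a chosen time window. The Python sketch below is an illustration with made-up Poisson data (for which M should be close to 0), not the authors' derivation:

        import numpy as np

        def memory_coefficient(interevent_times):
            """Memory coefficient M: Pearson correlation of consecutive interevent times."""
            tau = np.asarray(interevent_times, dtype=float)
            x, y = tau[:-1], tau[1:]
            return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

        def burst_sizes(event_times, window):
            """Number of events in each bursty train (consecutive events closer than `window`)."""
            gaps = np.diff(np.sort(event_times))
            sizes, current = [], 1
            for gap in gaps:
                if gap <= window:
                    current += 1
                else:
                    sizes.append(current)
                    current = 1
            sizes.append(current)
            return np.array(sizes)

        rng = np.random.default_rng(0)
        tau = rng.exponential(1.0, size=10000)          # uncorrelated interevent times
        print(round(memory_coefficient(tau), 3))        # close to 0
        print(burst_sizes(np.cumsum(tau), window=0.5)[:10])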

  6. Notes on implementation of sparsely distributed memory

    NASA Technical Reports Server (NTRS)

    Keeler, J. D.; Denning, P. J.

    1986-01-01

    The Sparsely Distributed Memory (SDM) developed by Kanerva is an unconventional memory design with very interesting and desirable properties. The memory works in a manner that is closely related to modern theories of human memory. The SDM model is discussed in terms of its implementation in hardware. Two appendices discuss the unconventional approaches of the SDM: Appendix A treats a resistive circuit for fast, parallel address decoding; and Appendix B treats a systolic array for high throughput read and write operations.

  7. Parallelization and checkpointing of GPU applications through program transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solano-Quinde, Lizandro Damian

    2012-01-01

    GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that make writing general-purpose applications for running on GPUs tractable has consolidated GPUs as an alternative for accelerating general purpose applications. Among the areas that have benefited from GPU acceleration are: signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running in multi-GPU systems. Furthermore, multi-GPU systems help to solve the GPU memory limitation for applications with large application memory footprints. Parallelizing single-GPU applications has been approached by libraries that distribute the workload at runtime; however, they impose execution overhead and are not portable. On the other hand, on traditional CPU systems, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. Like any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures. Current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed. The goal of this work is to exploit higher levels of parallelism and to develop support for application-level fault tolerance in applications using multiple GPUs. Our techniques reduce the burden of enhancing single-GPU applications to support these features. To achieve our goal, this work designs and implements a framework for enhancing a single-GPU OpenCL application through application transformation.

  8. Memory-Scalable GPU Spatial Hierarchy Construction.

    PubMed

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
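
    The memory-control idea behind PBFS, expanding only a bounded batch of frontier nodes per iteration rather than the entire frontier, can be conveyed with a deliberately simplified, CPU-only Python sketch of a median-split tree builder. This is not the paper's GPU kd-tree or BVH construction; the leaf size and batch bound are arbitrary:

        from collections import deque

        def build_tree_pbfs(points, leaf_size=4, max_active=8):
            """Median-split tree built in partial-breadth-first order: at most
            `max_active` nodes are expanded per iteration, bounding frontier memory."""
            root = {"points": sorted(points), "left": None, "right": None}
            queue = deque([root])
            while queue:
                # PBFS: take only a bounded batch from the frontier
                batch = [queue.popleft() for _ in range(min(max_active, len(queue)))]
                for node in batch:
                    pts = node["points"]
                    if len(pts) <= leaf_size:
                        continue                       # small enough: keep as a leaf
                    mid = len(pts) // 2
                    node["left"] = {"points": pts[:mid], "left": None, "right": None}
                    node["right"] = {"points": pts[mid:], "left": None, "right": None}
                    node["points"] = []                # interior node stores no points
                    queue.append(node["left"])
                    queue.append(node["right"])
            return root

        tree = build_tree_pbfs(list(range(100)))
        print(tree["left"]["points"], tree["right"]["points"])   # [] [] (interior nodes)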

  9. Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning

    NASA Technical Reports Server (NTRS)

    Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri

    1991-01-01

    Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on iPSC/860 to demonstrate the usefulness of our methods.

  10. The Theory of Localist Representation and of a Purely Abstract Cognitive System: The Evidence from Cortical Columns, Category Cells, and Multisensory Neurons.

    PubMed

    Roy, Asim

    2017-01-01

    The debate about representation in the brain and the nature of the cognitive system has been going on for decades now. This paper examines the neurophysiological evidence, primarily from single cell recordings, to get a better perspective on both the issues. After an initial review of some basic concepts, the paper reviews the data from single cell recordings - in cortical columns and of category-selective and multisensory neurons. In neuroscience, columns in the neocortex (cortical columns) are understood to be a basic functional/computational unit. The paper reviews the fundamental discoveries about the columnar organization and finds that it reveals a massively parallel search mechanism. This columnar organization could be the most extensive neurophysiological evidence for the widespread use of localist representation in the brain. The paper also reviews studies of category-selective cells. The evidence for category-selective cells reveals that localist representation is also used to encode complex abstract concepts at the highest levels of processing in the brain. A third major issue is the nature of the cognitive system in the brain and whether there is a form that is purely abstract and encoded by single cells. To provide evidence for a single-cell based purely abstract cognitive system, the paper reviews some of the findings related to multisensory cells. It appears that there is widespread usage of multisensory cells in the brain in the same areas where sensory processing takes place. Plus there is evidence for abstract modality invariant cells at higher levels of cortical processing. Overall, that reveals the existence of a purely abstract cognitive system in the brain. The paper also argues that since there is no evidence for dense distributed representation and since sparse representation is actually used to encode memories, there is actually no evidence for distributed representation in the brain. Overall, it appears that, at an abstract level, the brain is a massively parallel, distributed computing system that is symbolic. The paper also explains how grounded cognition and other theories of the brain are fully compatible with localist representation and a purely abstract cognitive system.

  11. The Theory of Localist Representation and of a Purely Abstract Cognitive System: The Evidence from Cortical Columns, Category Cells, and Multisensory Neurons

    PubMed Central

    Roy, Asim

    2017-01-01

    The debate about representation in the brain and the nature of the cognitive system has been going on for decades now. This paper examines the neurophysiological evidence, primarily from single cell recordings, to get a better perspective on both the issues. After an initial review of some basic concepts, the paper reviews the data from single cell recordings – in cortical columns and of category-selective and multisensory neurons. In neuroscience, columns in the neocortex (cortical columns) are understood to be a basic functional/computational unit. The paper reviews the fundamental discoveries about the columnar organization and finds that it reveals a massively parallel search mechanism. This columnar organization could be the most extensive neurophysiological evidence for the widespread use of localist representation in the brain. The paper also reviews studies of category-selective cells. The evidence for category-selective cells reveals that localist representation is also used to encode complex abstract concepts at the highest levels of processing in the brain. A third major issue is the nature of the cognitive system in the brain and whether there is a form that is purely abstract and encoded by single cells. To provide evidence for a single-cell based purely abstract cognitive system, the paper reviews some of the findings related to multisensory cells. It appears that there is widespread usage of multisensory cells in the brain in the same areas where sensory processing takes place. Plus there is evidence for abstract modality invariant cells at higher levels of cortical processing. Overall, that reveals the existence of a purely abstract cognitive system in the brain. The paper also argues that since there is no evidence for dense distributed representation and since sparse representation is actually used to encode memories, there is actually no evidence for distributed representation in the brain. Overall, it appears that, at an abstract level, the brain is a massively parallel, distributed computing system that is symbolic. The paper also explains how grounded cognition and other theories of the brain are fully compatible with localist representation and a purely abstract cognitive system. PMID:28261127

  12. [Early episodic memory impairments in Alzheimer's disease].

    PubMed

    Ergis, A-M; Eusop-Roussel, E

    2008-05-01

    Patients with Alzheimer's disease (AD) show early episodic memory impairments. Such deficits reflect specific impairments affecting one or several stages of the encoding, storage and retrieval processes. However, AD patients not only have great difficulty retrieving memories and information but also suffer from distortions of memory, such as intrusions and false recognitions. Intrusions can be defined as the unintentional recall of inappropriate information in laboratory learning tasks such as word-list recall and story recall. False recognition refers to the erroneous recognition of information that was not previously presented. The first objective of this review is to present studies from the literature that allowed a better understanding of the nature of episodic memory deficits in AD, and to examine recent research on false memories. The second part of this review is aimed at presenting recent research conducted on prospective memory (PM) in Alzheimer's disease. Prospective memory situations involve forming intentions and then realizing those intentions at some appropriate time in the future. Everyday examples of prospective memory include remembering to buy bread on the way home from work, remembering to give friends a message upon next encountering them, and remembering to take medication. Patients suffering from AD show difficulties in performing prospective tasks in daily life, according to the complaints of their caregivers, and these difficulties are massively present at the first stages of the disease. Nevertheless, very few studies have been dedicated to this subject, although the evaluation of PM could be helpful for the early diagnosis of AD.

  13. Implementation of a Fully-Balanced Periodic Tridiagonal Solver on a Parallel Distributed Memory Architecture

    DTIC Science & Technology

    1994-05-01

    T. M. Eidson (High Technology Corporation, Hampton, VA 23665) and G. Erlebacher (Institute for Computer Applications in Science and Engineering), Contract NAS1-19480, May 1994. Strategies for implementing a fully-balanced periodic tridiagonal solver on a parallel distributed memory architecture are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies.
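
    For context, a standard serial baseline for the periodic (cyclic) tridiagonal problem combines the Thomas algorithm with a Sherman-Morrison correction for the wraparound couplings. The Python sketch below is only an illustrative reference implementation; the report's parallel, fully-balanced strategies are not reproduced here:

        import numpy as np

        def thomas(a, b, c, d):
            """Thomas algorithm for a (non-periodic) tridiagonal system.
            a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal (c[-1] unused)."""
            n = len(b)
            cp, dp = np.zeros(n), np.zeros(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.zeros(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        def periodic_thomas(a, b, c, d):
            """Cyclic system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] (indices mod n),
            solved with two Thomas solves and a Sherman-Morrison correction."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            c, d = np.asarray(c, float), np.asarray(d, float)
            n = len(b)
            alpha, beta = c[-1], a[0]        # bottom-left and top-right corner couplings
            gamma = -b[0]                    # any convenient nonzero value
            bb = b.copy()
            bb[0] -= gamma
            bb[-1] -= alpha * beta / gamma
            y = thomas(a, bb, c, d)
            u = np.zeros(n)
            u[0], u[-1] = gamma, alpha
            z = thomas(a, bb, c, u)
            factor = (y[0] + beta * y[-1] / gamma) / (1.0 + z[0] + beta * z[-1] / gamma)
            return y - factor * z

        # Check against a dense solve on a small periodic system
        rng = np.random.default_rng(0)
        n = 6
        a, c = rng.uniform(1, 2, n), rng.uniform(1, 2, n)
        b, d = rng.uniform(5, 6, n), rng.uniform(-1, 1, n)
        A = np.diag(b) + np.diag(c[:-1], 1) + np.diag(a[1:], -1)
        A[0, -1], A[-1, 0] = a[0], c[-1]
        print(np.allclose(periodic_thomas(a, b, c, d), np.linalg.solve(A, d)))  # True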

  14. Examining procedural working memory processing in obsessive-compulsive disorder.

    PubMed

    Shahar, Nitzan; Teodorescu, Andrei R; Anholt, Gideon E; Karmon-Presser, Anat; Meiran, Nachshon

    2017-07-01

    Previous research has suggested that a deficit in working memory might underlie the difficulty obsessive-compulsive disorder (OCD) patients have in controlling their thoughts and actions. However, a recent meta-analysis found only small effect sizes for working memory deficits in OCD. Recently, a distinction has been made between declarative and procedural working memory. Working memory in OCD has been tested mostly using declarative measurements. However, OCD symptoms typically concern actions, making procedural working memory more relevant. Here, we tested the operation of procedural working memory in OCD. Participants with OCD and healthy controls performed a battery of choice reaction tasks under high and low procedural working memory demands. Reaction times (RTs) were analyzed using ex-Gaussian distribution fitting, revealing no group differences in the size of the RT distribution tail (i.e., the τ parameter), known to be sensitive to procedural working memory manipulations. Group differences, unrelated to working memory manipulations, were found in the leading edge of the RT distribution and analyzed using a two-stage evidence accumulation model. Modeling results suggested that perceptual difficulties might underlie the current group differences. In conclusion, our results suggest that procedural working memory processing is most likely intact in OCD, and raise a novel, yet untested, assumption regarding perceptual deficits in OCD. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
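
    Ex-Gaussian fitting of reaction-time data of the kind described above can be sketched with SciPy's exponnorm distribution, whose shape parameter K corresponds to tau/sigma. The simulated parameter values are arbitrary, and the example is not the authors' analysis code:

        import numpy as np
        from scipy import stats

        # Simulated reaction times (s): Normal(mu, sigma) + Exponential(tau)
        rng = np.random.default_rng(42)
        mu, sigma, tau = 0.45, 0.05, 0.15
        rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

        # scipy's exponnorm uses shape K = tau / sigma, loc = mu, scale = sigma
        K, loc, scale = stats.exponnorm.fit(rts)
        print(f"mu ~ {loc:.3f}, sigma ~ {scale:.3f}, tau ~ {K * scale:.3f}")
        # The recovered values should lie close to the simulated mu, sigma, and tau.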

  15. MASSIVE+: The Growth Histories of MASSIVE Survey Galaxies from their Globular Cluster Colors

    NASA Astrophysics Data System (ADS)

    Blakeslee, John

    2017-08-01

    The MASSIVE survey is targeting the 100 most massive galaxies within 108 Mpc that are visible in the northern sky. These most massive galaxies in the present-day universe reside in a surprisingly wide variety of environments, from rich clusters to fossil groups to near isolation. We propose to use WFC3/UVIS and ACS to carry out a deep imaging study of the globular cluster populations around a selected subset of the MASSIVE targets. Though much is known about GC systems of bright galaxies in rich clusters, we know surprisingly little about the effects of environment on these systems. The MASSIVE sample provides a golden opportunity to learn about the systematics of GC systems and what they can tell us about environmental drivers on the evolution of the highest mass galaxies. The most pressing questions to be addressed include: (1) Do isolated giants have the same constant mass fraction of GCs to total halo mass as BCGs of similar luminosity? (2) Do their GC systems show the same color (metallicity) distribution, which is an outcome of the mass spectrum of gas-rich halos during hierarchical growth? (3) Do the GCs in isolated high-mass galaxies follow the same radial distribution versus metallicity as in rich environments (a test of the relative importance of growth by accretion)? (4) Do the GCs of galaxies in sparse environments follow the same mass function? Our proposed second-band imaging will enable us to secure answers to these questions and add enormously to the legacy value of existing HST imaging of the highest mass galaxies in the universe.

  16. SAR processing on the MPP

    NASA Technical Reports Server (NTRS)

    Batcher, K. E.; Eddey, E. E.; Faiss, R. O.; Gilmore, P. A.

    1981-01-01

    The processing of synthetic aperture radar (SAR) signals using the massively parallel processor (MPP) is discussed. The fast Fourier transform convolution procedures employed in the algorithms are described. The MPP architecture comprises an array unit (ARU) which processes arrays of data; an array control unit which controls the operation of the ARU and performs scalar arithmetic; a program and data management unit which controls the flow of data; and a unique staging memory (SM) which buffers and permutes data. The ARU contains a 128 by 128 array of bit-serial processing elements (PE). Two-by-four subarrays of PE's are packaged in a custom VLSI HCMOS chip. The staging memory is a large multidimensional-access memory which buffers and permutes data flowing through the system. Efficient SAR processing is achieved via ARU communication paths and SM data manipulation. Real time processing capability can be realized via a multiple ARU, multiple SM configuration.
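
    The FFT convolution at the heart of such SAR processing rests on the convolution theorem: transform, multiply spectra, and transform back. A minimal NumPy sketch, illustrative only and unrelated to the MPP's bit-serial hardware implementation, is:

        import numpy as np

        def fft_convolve(signal, kernel):
            """Linear convolution via the FFT, zero-padded to avoid circular wraparound."""
            n = len(signal) + len(kernel) - 1
            return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

        x = np.array([1.0, 2.0, 3.0, 4.0])
        h = np.array([0.25, 0.5, 0.25])
        print(np.allclose(fft_convolve(x, h), np.convolve(x, h)))  # True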

  17. High Cycle-life Shape Memory Polymer at High Temperature

    PubMed Central

    Kong, Deyan; Xiao, Xinli

    2016-01-01

    High cycle-life is important for shape memory materials exposed to numerous cycles, and here we report a shape memory polyimide that maintained both high shape fixity (Rf) and shape recovery (Rr) during the more than 1000 bending cycles tested. Its critical stress is 2.78 MPa at 250 °C, and the shape recovery process can produce a stored energy of 0.218 J g^-1 at an efficiency of 31.3%. Its high Rf is determined by the large difference in storage modulus between the rubbery and glassy states, while the high Rr mainly originates from its permanent phase composed of strong π-π interactions and massive chain entanglements. Both the difference in storage modulus and the overall permanent phase were preserved during the bending deformation cycles, and thus high Rf and Rr were observed in every cycle; this high cycle-life will expand the application areas of SMPs enormously. PMID:27641148

  18. High-capacity optical long data memory based on enhanced Young's modulus in nanoplasmonic hybrid glass composites.

    PubMed

    Zhang, Qiming; Xia, Zhilin; Cheng, Yi-Bing; Gu, Min

    2018-03-22

    Emerging as an inevitable outcome of the big data era, long data are the massive amounts of data that capture changes in the real world over a long period of time. In this context, repeatedly recording and reading a few terabytes of data in a single storage device, with a baseline that remains unchanged for a century, is in high demand. Here, we demonstrate the concept of optical long data memory with nanoplasmonic hybrid glass composites. Through the sintering-free incorporation of nanorods into the earth-abundant hybrid glass composite, Young's modulus is enhanced by one to two orders of magnitude. This discovery, enabling reshaping control of plasmonic nanoparticles of multiple lengths, allows for continuous multi-level recording and reading with a capacity over 10 terabytes and no appreciable change of the baseline over 600 years, which opens new opportunities for long data memory that affects the past and future.

  19. Associative Visual Agnosia: A Case Study

    PubMed Central

    Charnallet, A.; Carbonnel, S.; David, D.; Moreaud, O.

    2008-01-01

    We report a case of massive associative visual agnosia. In the light of current theories of identification and semantic knowledge organization, a deficit involving both levels of structural description system and visual semantics must be assumed to explain the case. We suggest, in line with a previous case study [1], an alternative account in the framework of (non abstractive) episodic models of memory [4]. PMID:18413915

  20. The Emanuel Miller Memorial Lecture 2006: Adoption as Intervention. Meta-Analytic Evidence for Massive Catch-Up and Plasticity in Physical, Socio-Emotional, and Cognitive Development

    ERIC Educational Resources Information Center

    Van IJzendoorn, Marinus H.; Juffer, Femmie

    2006-01-01

    Background: Adopted children have been said to be difficult children, scarred by their past experiences in maltreating families or neglecting orphanages, or by genetic or pre- and perinatal problems. Is (domestic or international) adoption an effective intervention in the developmental domains of physical growth, attachment security, cognitive…

  1. Parallel k-means++ for Multiple Shared-Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackey, Patrick S.; Lewis, Robert R.

    2016-09-22

    In recent years k-means++ has become a popular initialization technique for improved k-means clustering. To date, most of the work done to improve its performance has involved parallelizing algorithms that are only approximations of k-means++. In this paper we present a parallelization of the exact k-means++ algorithm, with a proof of its correctness. We develop implementations for three distinct shared-memory architectures: multicore CPU, high performance GPU, and the massively multithreaded Cray XMT platform. We demonstrate the scalability of the algorithm on each platform. In addition we present a visual approach for showing which platform performed k-means++ the fastest for varying data sizes.
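
    The exact k-means++ seeding rule being parallelized here, drawing each new center with probability proportional to the squared distance to its nearest already-chosen center, can be written compactly in serial Python. The sketch below illustrates the rule only, not the paper's multicore, GPU, or Cray XMT implementations:

        import numpy as np

        def kmeans_pp_init(X, k, rng):
            """Exact k-means++ seeding with D^2 weighting."""
            n = X.shape[0]
            centers = [X[rng.integers(n)]]                 # first center: uniform at random
            d2 = np.sum((X - centers[0]) ** 2, axis=1)     # squared distance to nearest center
            for _ in range(1, k):
                idx = rng.choice(n, p=d2 / d2.sum())       # D^2-weighted draw
                centers.append(X[idx])
                d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
            return np.array(centers)

        rng = np.random.default_rng(7)
        X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ([0, 0], [5, 5], [0, 5])])
        print(kmeans_pp_init(X, k=3, rng=rng))             # typically one seed near each blob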

  2. GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil

    2015-11-15

    Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.

  3. The efficiency of seismic attributes to differentiate between massive and non-massive carbonate successions for hydrocarbon exploration activity

    NASA Astrophysics Data System (ADS)

    Sarhan, Mohammad Abdelfattah

    2017-12-01

    The present work investigates the efficiency of applying volume seismic attributes to differentiate between massive and non-massive carbonate sedimentary successions using seismic data. The main objective of this work is to provide a pre-drilling technique to recognize the porous carbonate section (probable hydrocarbon reservoirs) based on seismic data. A case study from the Upper Cretaceous - Eocene carbonate successions of Abu Gharadig Basin, northern Western Desert of Egypt, has been tested in this work. The qualitative interpretations of the well-log data of four available wells distributed in the study area, namely the AG-2, AG-5, AG-6 and AG-15 wells, have confirmed that the Upper Cretaceous Khoman A Member represents the massive carbonate section whereas the Eocene Apollonia Formation represents the non-massive carbonate unit. The present work shows that the most promising seismic attributes capable of differentiating between massive and non-massive carbonate sequences are Root Mean Square (RMS) Amplitude, Envelope (Reflection Strength), Instantaneous Frequency, Chaos, Local Flatness and Relative Acoustic Impedance.
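
    Several of the listed trace attributes have simple signal-processing definitions. The NumPy/SciPy sketch below computes sliding-window RMS amplitude, envelope (reflection strength), and instantaneous frequency on a synthetic trace; it is an illustration of the attribute definitions, not the interpretation workflow used in the study:

        import numpy as np
        from scipy.signal import hilbert

        def rms_amplitude(trace, window):
            """Sliding-window RMS amplitude of a trace."""
            kernel = np.ones(window) / window
            return np.sqrt(np.convolve(trace ** 2, kernel, mode="same"))

        def envelope(trace):
            """Instantaneous amplitude (reflection strength) via the Hilbert transform."""
            return np.abs(hilbert(trace))

        def instantaneous_frequency(trace, dt):
            """Time derivative of the instantaneous phase, in Hz."""
            phase = np.unwrap(np.angle(hilbert(trace)))
            return np.gradient(phase, dt) / (2.0 * np.pi)

        t = np.arange(0, 1, 0.002)                          # 2 ms sampling
        trace = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)
        print(rms_amplitude(trace, 25)[:3])
        print(envelope(trace)[:3], instantaneous_frequency(trace, dt=0.002)[100:103])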

  4. Episodic memory in aspects of large-scale brain networks

    PubMed Central

    Jeong, Woorim; Chung, Chun Kee; Kim, June Sic

    2015-01-01

    Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested involvement of more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies also have revealed that memory function is subserved by not only the MTL system but also a distributed network, particularly the default-mode network (DMN). Furthermore, researchers have begun to investigate memory networks throughout the entire brain not restricted to the specific resting-state network (RSN). Altered patterns of functional connectivity (FC) among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment. PMID:26321939

  5. Modeling Confidence and Response Time in Recognition Memory

    ERIC Educational Resources Information Center

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

  6. CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms

    PubMed Central

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.

    2011-01-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404

  7. CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms.

    PubMed

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W

    2012-06-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  8. An enhanced lumped element electrical model of a double barrier memristive device

    NASA Astrophysics Data System (ADS)

    Solan, Enver; Dirkmann, Sven; Hansen, Mirko; Schroeder, Dietmar; Kohlstedt, Hermann; Ziegler, Martin; Mussenbrock, Thomas; Ochs, Karlheinz

    2017-05-01

    The massively parallel approach of neuromorphic circuits leads to effective methods for solving complex problems. It has turned out that resistive switching devices with a continuous resistance range are potential candidates for such applications. These devices are memristive systems—nonlinear resistors with memory. They are fabricated with nanotechnology, and hence parameter spread during fabrication may hamper reproducible analyses. This issue makes simulation models of memristive devices worthwhile. Kinetic Monte-Carlo simulations based on a distributed model of the device can be used to understand the underlying physical and chemical phenomena. However, such simulations are very time-consuming and convenient neither for investigations of whole circuits nor for real-time applications, e.g. emulation purposes. Instead, a concentrated model of the device can be used for both fast simulations and real-time applications. We introduce an enhanced electrical model of a valence change mechanism (VCM) based double barrier memristive device (DBMD) with a continuous resistance range. This device consists of an ultra-thin memristive layer sandwiched between a tunnel barrier and a Schottky-contact. The introduced model leads to very fast simulations by using standard circuit simulation tools while maintaining physically meaningful parameters. Kinetic Monte-Carlo simulations based on a distributed model and experimental data have been utilized as references to verify the concentrated model.

  9. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that needs to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803

  10. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising the ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability, but they compromise the ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model and the capabilities of the toolkit, and discusses its evolution.

  11. Programming distributed memory architectures using Kali

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, in part because of the relatively low level of current programming environments for such machines. A new programming environment is presented, Kali, which provides a global name space and allows direct access to remote data values. In order to retain efficiency, Kali provides a system of annotations, allowing the user to control those aspects of the program critical to performance, such as data distribution and load balancing. The primitives and constructs provided by the language are described, and some of the issues raised in translating a Kali program for execution on distributed memory systems are also discussed.

  12. Distributed simulation using a real-time shared memory network

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.

    1993-01-01

    The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment. Therefore, a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation were measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to develop the inter-processor communication, and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.

  13. Distributed Compressive CSIT Estimation and Feedback for FDD Multi-User Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Rao, Xiongbin; Lau, Vincent K. N.

    2014-06-01

    To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying the conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme in which the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through the closed-form expressions, we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.
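
    A minimal sketch of the joint-sparsity idea, assuming a generic simultaneous orthogonal matching pursuit in which the support index at each step is chosen by aggregating correlations over all users' residuals. This is a textbook-style stand-in, not the paper's exact joint-OMP algorithm or its analysis; Phi, Y and the sparsity level are illustrative names.

    # Generic simultaneous-OMP sketch exploiting joint sparsity across users:
    # the same support index is selected for all users by aggregating correlations.
    import numpy as np

    def joint_omp(Phi, Y, sparsity):
        """Phi: (m, n) common measurement matrix; Y: (m, U), one column per user."""
        m, n = Phi.shape
        residual = Y.copy()
        support = []
        for _ in range(sparsity):
            # aggregate correlation magnitude over all users (joint sparsity assumption)
            corr = np.abs(Phi.conj().T @ residual).sum(axis=1)
            corr[support] = 0.0                      # never pick an index twice
            support.append(int(np.argmax(corr)))
            A = Phi[:, support]
            coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
            residual = Y - A @ coeffs
        X = np.zeros((n, Y.shape[1]), dtype=Y.dtype)
        X[support, :] = coeffs
        return X, sorted(support)

    # small synthetic check: 3 users sharing the same 2-sparse support
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((20, 60))
    X_true = np.zeros((60, 3)); X_true[[7, 41], :] = rng.standard_normal((2, 3))
    X_hat, supp = joint_omp(Phi, Phi @ X_true, sparsity=2)
    print(supp)   # should recover the shared support {7, 41}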

  14. The mass distribution of Population III stars

    NASA Astrophysics Data System (ADS)

    Fraser, M.; Casey, A. R.; Gilmore, G.; Heger, A.; Chan, C.

    2017-06-01

    Extremely metal-poor (EMP) stars are uniquely informative on the nature of massive Population III stars. Modulo a few elements that vary with stellar evolution, the present-day photospheric abundances observed in EMP stars are representative of their natal gas cloud composition. For this reason, the chemistry of EMP stars closely reflects the nucleosynthetic yields of supernovae from massive Population III stars. Here we collate detailed abundances of 53 EMP stars from the literature and infer the masses of their Population III progenitors. We fit a simple initial mass function (IMF) to a subset of 29 of the inferred Population III star masses, and find that the mass distribution is well represented by a power-law IMF with exponent α = 2.35^{+0.29}_{-0.24}. The inferred maximum progenitor mass for supernovae from massive Population III stars is M_{max} = 87^{+13}_{-33} M⊙, and we find no evidence in our sample for a contribution from stars with masses above ˜120 M⊙. The minimum mass is strongly consistent with the theoretical lower mass limit for Population III supernovae. We conclude that the IMF for massive Population III stars is consistent with the IMF of present-day massive stars, and that stars well below the supernova mass limit may have formed and survived to the present day.
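
    As an illustration of the kind of fit described (not the authors' inference machinery), the slope of a continuous power-law IMF above a fixed lower cutoff can be estimated with the standard maximum-likelihood formula; the cutoff value and sample below are synthetic.

    # Minimal illustration of fitting a power-law slope to a set of inferred masses,
    # using the standard continuous maximum-likelihood estimator with fixed m_min.
    import numpy as np

    def powerlaw_alpha(masses, m_min):
        m = np.asarray(masses, dtype=float)
        m = m[m >= m_min]
        n = m.size
        alpha = 1.0 + n / np.sum(np.log(m / m_min))   # MLE for p(m) proportional to m**(-alpha)
        sigma = (alpha - 1.0) / np.sqrt(n)            # approximate standard error
        return alpha, sigma

    # synthetic masses drawn from a Salpeter-like IMF (alpha = 2.35) above 10 Msun
    rng = np.random.default_rng(1)
    u = rng.random(29)
    m_min, alpha_true = 10.0, 2.35
    samples = m_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF sampling
    print(powerlaw_alpha(samples, m_min))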

  15. Resurrecting hot dark matter - Large-scale structure from cosmic strings and massive neutrinos

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.

    1988-01-01

    These are the results of a numerical simulation of the formation of large-scale structure from cosmic-string loops in a universe dominated by massive neutrinos (hot dark matter). This model has several desirable features. The final matter distribution contains isolated density peaks embedded in a smooth background, producing a natural bias in the distribution of luminous matter. Because baryons can accrete onto the cosmic strings before the neutrinos, the galaxies will have baryon cores and dark neutrino halos. Galaxy formation in this model begins much earlier than in random-phase models. On large scales the distribution of clustered matter visually resembles the CfA survey, with large voids and filaments.

  16. Massive neutrinos and the pancake theory of galaxy formation

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    Three problems encountered by the pancake theory of galaxy formation in a massive neutrino-dominated universe are discussed. A nonlinear model for pancakes is shown to reconcile the data with the predicted coherence length and velocity field, and minimal predictions are given of the contribution from the large-scale matter distribution.

  17. Frequent Statement and Dereference Elimination for Imperative and Object-Oriented Distributed Programs

    PubMed Central

    El-Zawawy, Mohamed A.

    2014-01-01

    This paper introduces new approaches for the analysis of frequent statement and dereference elimination for imperative and object-oriented distributed programs running on parallel machines equipped with hierarchical memories. The paper uses languages whose address spaces are globally partitioned. Distributed programs allow defining data layout and threads writing to and reading from other thread memories. Three type systems (for imperative distributed programs) are the tools of the proposed techniques. The first type system defines for every program point a set of calculated (ready) statements and memory accesses. The second type system uses an enriched version of types of the first type system and determines which of the ready statements and memory accesses are used later in the program. The third type system uses the information gathered so far to eliminate unnecessary statement computations and memory accesses (the analysis of frequent statement and dereference elimination). Extensions to these type systems are also presented to cover object-oriented distributed programs. Two advantages of our work over related work are the following. The hierarchical style of concurrent parallel computers is similar to the memory model used in this paper. In our approach, each analysis result is assigned a type derivation (which serves as a correctness proof). PMID:24892098

  18. Visual working memory is more tolerant than visual long-term memory.

    PubMed

    Schurgin, Mark W; Flombaum, Jonathan I

    2018-05-07

    Human visual memory is tolerant, meaning that it supports object recognition despite variability across encounters at the image level. Tolerant object recognition remains one capacity in which artificial intelligence trails humans. Typically, tolerance is described as a property of human visual long-term memory (VLTM). In contrast, visual working memory (VWM) is not usually ascribed a role in tolerant recognition, with tests of that system usually demanding discriminatory power (identifying changes, not sameness). There are good reasons to expect that VLTM is more tolerant; functionally, recognition over the long term must accommodate the fact that objects will not be viewed under identical conditions; and practically, the passive and massive nature of VLTM may impose relatively permissive criteria for thinking that two inputs are the same. But empirically, tolerance has never been compared across working and long-term visual memory. We therefore developed a novel paradigm for equating encoding and test across different memory types. In each experiment trial, participants saw two objects, memory for one tested immediately (VWM) and later for the other (VLTM). VWM performance was better than VLTM and remained robust despite the introduction of image and object variability. In contrast, VLTM performance suffered linearly as more variability was introduced into test stimuli. Additional experiments excluded interference effects as causes for the observed differences. These results suggest the possibility of a previously unidentified role for VWM in the acquisition of tolerant representations for object recognition.

  19. Design and testing of the first 2D Prototype Vertically Integrated Pattern Recognition Associative Memory

    NASA Astrophysics Data System (ADS)

    Liu, T.; Deptuch, G.; Hoff, J.; Jindariani, S.; Joshi, S.; Olsen, J.; Tran, N.; Trimpl, M.

    2015-02-01

    An associative memory-based track finding approach has been proposed for a Level 1 tracking trigger to cope with increasing luminosities at the LHC. The associative memory uses a massively parallel architecture to tackle the intrinsically complex combinatorics of track finding algorithms, thus avoiding the typical power law dependence of execution time on occupancy and solving the pattern recognition in times roughly proportional to the number of hits. This is of crucial importance given the large occupancies typical of hadronic collisions. The design of an associative memory system capable of dealing with the complexity of HL-LHC collisions and with the short latency required by Level 1 triggering poses significant, as yet unsolved, technical challenges. For this reason, an aggressive R&D program has been launched at Fermilab to advance state-of-the-art associative memory technology: the so-called VIPRAM (Vertically Integrated Pattern Recognition Associative Memory) project. The VIPRAM leverages emerging 3D vertical integration technology to build faster and denser Associative Memory devices. The first step is to implement in conventional VLSI the associative memory building blocks that can be used in 3D stacking; in other words, the building blocks are laid out as if they were part of a 3D design. In this paper, we report on the first successful implementation of a 2D VIPRAM demonstrator chip (protoVIPRAM00). The results show that these building blocks are ready for 3D stacking.
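
    The pattern-matching principle behind such devices can be illustrated with a toy software emulation: stored patterns are tuples of coarse hit IDs, one per detector layer, compared against the hits of an event. The names, the pattern bank and the max_missing option are all illustrative; a real AM chip performs these comparisons for every stored pattern in parallel rather than in a loop.

    # Toy emulation of associative-memory pattern matching for track finding.
    NLAYERS = 4

    pattern_bank = [
        (3, 5, 8, 12),     # pre-computed road 0
        (3, 6, 9, 13),     # pre-computed road 1
        (7, 7, 7, 7),      # pre-computed road 2
    ]

    def matched_roads(event_hits, bank, max_missing=0):
        """event_hits: list of sets, one set of coarse hit IDs per layer."""
        roads = []
        for road_id, pattern in enumerate(bank):
            missing = sum(1 for layer, hit in enumerate(pattern)
                          if hit not in event_hits[layer])
            if missing <= max_missing:
                roads.append(road_id)
        return roads

    event = [{3, 7}, {5, 6}, {8}, {12, 13}]      # hits observed in each layer
    print(matched_roads(event, pattern_bank))    # road 0 matches all four layers
    print(matched_roads(event, pattern_bank, max_missing=1))  # road 1 also accepted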

  20. Design and testing of the first 2D Prototype Vertically Integrated Pattern Recognition Associative Memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T.; Deptuch, G.; Hoff, J.

    An associative memory-based track finding approach has been proposed for a Level 1 tracking trigger to cope with increasing luminosities at the LHC. The associative memory uses a massively parallel architecture to tackle the intrinsically complex combinatorics of track finding algorithms, thus avoiding the typical power law dependence of execution time on occupancy and solving the pattern recognition in times roughly proportional to the number of hits. This is of crucial importance given the large occupancies typical of hadronic collisions. The design of an associative memory system capable of dealing with the complexity of HL-LHC collisions and with the short latency required by Level 1 triggering poses significant, as yet unsolved, technical challenges. For this reason, an aggressive R&D program has been launched at Fermilab to advance state-of-the-art associative memory technology: the so-called VIPRAM (Vertically Integrated Pattern Recognition Associative Memory) project. The VIPRAM leverages emerging 3D vertical integration technology to build faster and denser Associative Memory devices. The first step is to implement in conventional VLSI the associative memory building blocks that can be used in 3D stacking; in other words, the building blocks are laid out as if they were part of a 3D design. In this paper, we report on the first successful implementation of a 2D VIPRAM demonstrator chip (protoVIPRAM00). The results show that these building blocks are ready for 3D stacking.

  1. An empirical investigation of sparse distributed memory using discrete speech recognition

    NASA Technical Reports Server (NTRS)

    Danforth, Douglas G.

    1990-01-01

    Presented here is a step-by-step analysis of how the basic Sparse Distributed Memory (SDM) model can be modified to enhance its generalization capabilities for classification tasks. Data are taken from speech generated by a single talker. Experiments are used to investigate the theory of associative memories and the question of generalization from specific instances.
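
    For readers unfamiliar with the basic SDM model being modified here, a minimal Kanerva-style write/read cycle looks roughly like the sketch below. The number of hard locations, the word length and the activation radius are illustrative toy values, not those used in the experiments.

    # Minimal Kanerva-style sparse distributed memory sketch.
    import numpy as np

    rng = np.random.default_rng(42)
    N_BITS, N_LOCATIONS, RADIUS = 256, 2000, 112     # illustrative toy parameters

    hard_addresses = rng.integers(0, 2, size=(N_LOCATIONS, N_BITS))   # fixed, random
    counters = np.zeros((N_LOCATIONS, N_BITS), dtype=int)             # the memory

    def activated(address):
        """Locations whose Hamming distance to the address is within the radius."""
        dist = np.count_nonzero(hard_addresses != address, axis=1)
        return dist <= RADIUS

    def write(address, data):
        """Add the bipolar (+1/-1) form of the data word to all activated counters."""
        counters[activated(address)] += 2 * data - 1

    def read(address):
        """Sum the activated counters and threshold at zero to recover a binary word."""
        return (counters[activated(address)].sum(axis=0) > 0).astype(int)

    word = rng.integers(0, 2, N_BITS)
    write(word, word)                                    # autoassociative store
    noisy = word.copy(); noisy[:20] ^= 1                 # flip 20 bits of the cue
    print(np.count_nonzero(read(noisy) != word))         # ideally 0: the word is recovered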

  2. Implementation of a parallel unstructured Euler solver on shared and distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.; Das, Raja; Saltz, Joel; Vermeland, R. E.

    1992-01-01

    An efficient three dimensional unstructured Euler solver is parallelized on a Cray Y-MP C90 shared memory computer and on an Intel Touchstone Delta distributed memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between two differing architectures are made.

  3. Switching behavior of resistive change memory using oxide nanowires

    NASA Astrophysics Data System (ADS)

    Aono, Takashige; Sugawa, Kosuke; Shimizu, Tomohiro; Shingubara, Shoso; Takase, Kouichi

    2018-06-01

    Resistive change random access memory (ReRAM), which is expected to be the next-generation nonvolatile memory, often has wide switching-voltage distributions due to the many kinds of conductive filaments that can form. In this study, we have tried to suppress this spread by structurally restricting the filament-forming area using NiO nanowires. Capacitors built from Ni metal nanowires with oxidized surfaces showed good switching behavior with narrow distributions. The knowledge gained from our study will be very helpful in producing practical ReRAM devices.

  4. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
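
    The core object such a simulator stores is the full 2^n-amplitude state vector, which is why memory grows so quickly with qubit count (2^36 complex doubles is the ~1 TB regime mentioned above) and why the vector must be distributed. A single-node toy sketch, not the authors' code:

    # Tiny single-node state-vector sketch of what a universal quantum simulator updates.
    import numpy as np

    def apply_1q_gate(state, gate, target, n_qubits):
        """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
        psi = state.reshape((2,) * n_qubits)
        psi = np.moveaxis(psi, target, 0)          # bring the target axis to the front
        psi = np.tensordot(gate, psi, axes=([1], [0]))
        psi = np.moveaxis(psi, 0, target)
        return psi.reshape(-1)

    n = 3
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                  # |000>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    for q in range(n):                              # Hadamard on every qubit
        state = apply_1q_gate(state, H, q, n)
    print(np.round(np.abs(state) ** 2, 3))          # uniform superposition: all 0.125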

  5. Cultural scripts guide recall of intensely positive life events.

    PubMed

    Collins, Katherine A; Pillemer, David B; Ivcevic, Zorana; Gooze, Rachel A

    2007-06-01

    In four studies, we examined the temporal distribution of positive and negative memories of momentous life events. College students and middle-aged adults reported events occurring from the ages of 8 to 18 years in which they had felt especially good or especially bad about themselves. Distributions of positive memories showed a marked peak at ages 17 and 18. In contrast, distributions of negative memories were relatively flat. These patterns were consistent for males and females and for younger and older adults. Content analyses indicated that a substantial proportion of positive memories from late adolescence described culturally prescribed landmark events surrounding the major life transition from high school to college. When the participants were asked for recollections from life periods that lack obvious age-linked milestone events, age distributions of positive and negative memories were similar. The results support and extend Berntsen and Rubin's (2004) conclusion that cultural expectations, or life scripts, organize recall of positive, but not negative, events.

  6. The Tarantula Massive Binary Monitoring. I. Observational campaign and OB-type spectroscopic binaries

    NASA Astrophysics Data System (ADS)

    Almeida, L. A.; Sana, H.; Taylor, W.; Barbá, R.; Bonanos, A. Z.; Crowther, P.; Damineli, A.; de Koter, A.; de Mink, S. E.; Evans, C. J.; Gieles, M.; Grin, N. J.; Hénault-Brunet, V.; Langer, N.; Lennon, D.; Lockwood, S.; Maíz Apellániz, J.; Moffat, A. F. J.; Neijssel, C.; Norman, C.; Ramírez-Agudelo, O. H.; Richardson, N. D.; Schootemeijer, A.; Shenar, T.; Soszyński, I.; Tramper, F.; Vink, J. S.

    2017-02-01

    Context. Massive binaries play a crucial role in the Universe. Knowing the distributions of their orbital parameters is important for a wide range of topics from stellar feedback to binary evolution channels and from the distribution of supernova types to gravitational wave progenitors, yet no direct measurements exist outside the Milky Way. Aims: The Tarantula Massive Binary Monitoring project was designed to help fill this gap by obtaining multi-epoch radial velocity (RV) monitoring of 102 massive binaries in the 30 Doradus region. Methods: In this paper we analyze 32 FLAMES/GIRAFFE observations of 93 O- and 7 B-type binaries. We performed a Fourier analysis and obtained orbital solutions for 82 systems: 51 single-lined (SB1) and 31 double-lined (SB2) spectroscopic binaries. Results: Overall, the binary fraction and orbital properties across the 30 Doradus region are found to be similar to existing Galactic samples. This indicates that within these domains environmental effects are of second order in shaping the properties of massive binary systems. A small difference is found in the distribution of orbital periods, which is slightly flatter (in log space) in 30 Doradus than in the Galaxy, although this may be compatible within error estimates and differences in the fitting methodology. Also, orbital periods in 30 Doradus can be as short as 1.1 d, somewhat shorter than seen in Galactic samples. Equal mass binaries (q > 0.95) in 30 Doradus are all found outside NGC 2070, the central association that surrounds R136a, the very young and massive cluster at 30 Doradus's core. Most of the differences, albeit small, are compatible with expectations from binary evolution. One outstanding exception, however, is the fact that earlier spectral types (O2-O7) tend to have shorter orbital periods than later spectral types (O9.2-O9.7). Conclusions: Our results point to a relative universality of the incidence rate of massive binaries and their orbital properties in the metallicity range from solar (Z⊙) to about half solar. This provides the first direct constraints on massive binary properties in massive star-forming galaxies at the Universe's peak of star formation at redshifts z ≈ 1 to 2, which are estimated to have Z ≈ 0.5 Z⊙. The log of observations and RV measurements for all targets are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A84

  7. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks.

    PubMed

    Lee, Byung Moo

    2017-12-29

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the big obstacles to the realization of massive MIMO systems is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or related user equipments (UEs). It has already been reported that antenna-group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but how to decide the number of antennas needed in each group has remained an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS-overhead-reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices in this way provides a framework for configuring wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) using zero-forcing (ZF) and matched filtering (MF) precoding for the RS-overhead-reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of channel estimation. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices.
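
    The "invert a closed-form SE approximation to size an antenna group" idea can be sketched generically. The formula below is a standard zero-forcing sum-rate bound for K single-antenna users, M antennas and SNR rho under perfect CSI and equal power allocation; it is only a stand-in for the paper's own approximation, which additionally includes a channel-estimation-error factor and the RS overhead, and all parameter values are illustrative.

    # Generic sketch: pick the smallest antenna count whose approximate SE meets a target.
    import math

    def se_zf(M, K, rho):
        """Approximate sum spectral efficiency (bits/s/Hz) of ZF precoding, perfect CSI."""
        if M <= K:
            return 0.0
        return K * math.log2(1.0 + rho * (M - K) / K)

    def antennas_for_target(target_se, K, rho, M_max=1024):
        """Smallest M (per antenna group) whose approximate SE reaches the target."""
        for M in range(K + 1, M_max + 1):
            if se_zf(M, K, rho) >= target_se:
                return M
        return None

    K, rho = 10, 1.0                    # 10 users, 0 dB SNR (illustrative values)
    for target in (20, 40, 60):
        print(target, antennas_for_target(target, K, rho))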

  8. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks

    PubMed Central

    2017-01-01

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the big obstacles to the realization of massive MIMO systems is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or related user equipments (UEs). It has already been reported that antenna-group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but how to decide the number of antennas needed in each group has remained an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS-overhead-reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices in this way provides a framework for configuring wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) using zero-forcing (ZF) and matched filtering (MF) precoding for the RS-overhead-reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of the channel estimation. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices. PMID:29286339

  9. Distributed learning enhances relational memory consolidation.

    PubMed

    Litman, Leib; Davachi, Lila

    2008-09-01

    It has long been known that distributed learning (DL) provides a mnemonic advantage over massed learning (ML). However, the underlying mechanisms that drive this robust mnemonic effect remain largely unknown. In two experiments, we show that DL across a 24 hr interval does not enhance immediate memory performance but instead slows the rate of forgetting relative to ML. Furthermore, we demonstrate that this savings in forgetting is specific to relational, but not item, memory. In the context of extant theories and knowledge of memory consolidation, these results suggest that an important mechanism underlying the mnemonic benefit of DL is enhanced memory consolidation. We speculate that synaptic strengthening mechanisms supporting long-term memory consolidation may be differentially mediated by the spacing of memory reactivation. These findings have broad implications for the scientific study of episodic memory consolidation and, more generally, for educational curriculum development and policy.

  10. Single Sided Messaging v. 0.6.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew Leon; Farmer, Matthew Shane; Hassani, Amin

    Single-Sided Messaging (SSM) is a portable, multitransport networking library that enables applications to leverage potential one-sided capabilities of underlying network transports. It also provides desirable semantics that services for high-performance, massively parallel computers can leverage, such as an explicit cancel operation for pending transmissions, as well as enhanced matching semantics favoring large numbers of buffers attached to a single match entry. This release supports TCP/IP, shared memory, and Infiniband.

  11. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  12. Achieving Superior Two-Way Actuation by the Stress-Coupling of Nanoribbons and Nanocrystalline Shape Memory Alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Shijie; Liu, Yinong; Ren, Yang

    2016-06-08

    Inspired by the driving principle of traditional bias-type two-way actuators, we developed a novel two-way actuation nanocomposite wire in which a massive number of Nb nanoribbons with ultra-large elastic strains are loaded inside a shape memory alloy (SMA) matrix to form a continuous array of nano bias actuation pairs for two-way actuation. The composite exhibits a two-way actuation strain of 3.2% during a thermal cycle and an actuation stress of 934 MPa upon heating, which is about twice as high as the ~500 MPa found in reported two-way SMAs. Upon cooling, the composite shows an actuation stress of 134 MPa and a mechanical work output of 1.08×10⁶ J/m³, which are about three and five times higher, respectively, than those of reported two-way SMAs. It is revealed that the massive number of Nb nanoribbons in a compressive state provides the high actuation stress and high work output upon cooling, while the SMA matrix with high yield strength offers the high actuation stress upon heating. Compared to traditional bias-type two-way actuators, the two-way actuation composite, with its small volume and simple construction, favours the miniaturization and simplification of actuators.

  13. Escape of gravitational radiation from the field of massive bodies

    NASA Technical Reports Server (NTRS)

    Price, Richard H.; Pullin, Jorge; Kundu, Prasun K.

    1993-01-01

    We consider a compact source of gravitational waves of frequency omega in or near a massive spherically symmetric distribution of matter or a black hole. Recent calculations have led to apparently contradictory results for the influence of the massive body on the propagation of the waves. We show here that the results are in fact consistent and in agreement with the 'standard' viewpoint in which the high-frequency compact source produces the radiation as if in a flat background, and the background curvature affects the propagation of these waves.

  14. Application of CHAD hydrodynamics to shock-wave problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, H.E.; O`Rourke, P.J.; Sahota, M.S.

    1997-12-31

    CHAD is the latest in a sequence of continually evolving computer codes written to effectively utilize massively parallel computer architectures and the latest grid generators for unstructured meshes. Its applications range from automotive design issues such as in-cylinder and manifold flows of internal combustion engines, vehicle aerodynamics, underhood cooling and passenger compartment heating, ventilation, and air conditioning to shock hydrodynamics and materials modeling. CHAD solves the full unsteady Navier-Stokes equations with the k-epsilon turbulence model in three space dimensions. The code has four major features that distinguish it from the earlier KIVA code, also developed at Los Alamos. First, it is based on a node-centered, finite-volume method in which, like finite element methods, all fluid variables are located at computational nodes. The computational mesh efficiently and accurately handles all element shapes ranging from tetrahedra to hexahedra. Second, it is written in standard Fortran 90 and relies on automatic domain decomposition and a universal communication library written in standard C and MPI for unstructured grids to effectively exploit distributed-memory parallel architectures. Thus the code is fully portable to a variety of computing platforms such as uniprocessor workstations, symmetric multiprocessors, clusters of workstations, and massively parallel platforms. Third, CHAD utilizes a variable explicit/implicit upwind method for convection that improves computational efficiency in flows that have large velocity Courant number variations due to velocity or mesh-size variations. Fourth, CHAD is designed to also simulate shock hydrodynamics involving multimaterial anisotropic behavior under high shear. The authors will discuss CHAD capabilities and show several sample calculations showing the strengths and weaknesses of CHAD.

  15. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth; Geveci, Berk

    2014-11-01

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends infer that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today’s distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.

  16. A Spatially-Registered, Massively Parallelised Data Structure for Interacting with Large, Integrated Geodatasets

    NASA Astrophysics Data System (ADS)

    Irving, D. H.; Rasheed, M.; O'Doherty, N.

    2010-12-01

    The efficient storage, retrieval and interactive use of subsurface data present great challenges in geodata management. Data volumes are typically massive, complex and poorly indexed with inadequate metadata. Derived geomodels and interpretations are often tightly bound in application-centric and proprietary formats; open standards for long-term stewardship are poorly developed. Consequently current data storage is a combination of: complex Logical Data Models (LDMs) based on file storage formats; 2D GIS tree-based indexing of spatial data; and translations of serialised memory-based storage techniques into disk-based storage. Whilst adequate for working at the mesoscale over short timeframes, these approaches all possess technical and operational shortcomings: data model complexity; anisotropy of access; scalability to large and complex datasets; and weak implementation and integration of metadata. High performance hardware such as parallelised storage and Relational Database Management Systems (RDBMSs) have long been exploited in many solutions but the underlying data structure must provide commensurate efficiencies to allow multi-user, multi-application and near-realtime data interaction. We present an open Spatially-Registered Data Structure (SRDS) built on Massively Parallel Processing (MPP) database architecture implemented by an ANSI SQL 2008 compliant RDBMS. We propose an LDM comprising a 3D Earth model that is decomposed such that each increasing Level of Detail (LoD) is achieved by recursively halving the bin size until it is less than the error in each spatial dimension for that data point. The value of an attribute at that point is stored as a property of that point and at that LoD. It is key to the numerical efficiency of the SRDS that it is underpinned by a power-of-two relationship, thus precluding the need for computationally intensive floating point arithmetic. Our approach employed a tightly clustered MPP array with small clusters of storage, processors and memory communicating over a high-speed network inter-connect. This is a shared-nothing architecture where resources are managed within each cluster, unlike most other RDBMSs. Data are accessed on this architecture by their primary index values which utilises the hashing algorithm for point-to-point access. The hashing algorithm’s main role is the efficient distribution of data across the clusters based on the primary index. In this study we used 3D seismic volumes, 2D seismic profiles and borehole logs to demonstrate application in both (x,y,TWT) and (x,y,z)-space. In the SRDS the primary index is a composite column index of (x,y) to avoid invoking time-consuming full table scans, as is the case in tree-based systems. This means that data access is isotropic. A query for data in a specified spatial range permits retrieval recursively by point-to-point queries within each nested LoD, yielding true linear performance up to the Petabyte scale with hardware scaling presenting the primary limiting factor. Our architecture and LDM promote: realtime interaction with massive data volumes; streaming of result sets and server-rendered 2D/3D imagery; rigorous workflow control and auditing; and in-database algorithms run directly against data as an HPC cloud service.
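
    A sketch of the power-of-two level-of-detail indexing described above, assuming hypothetical function names: the bin is halved until it falls below the per-dimension positional error of the data point, after which bin indices follow from simple integer arithmetic.

    # Illustrative power-of-two LoD indexing (hypothetical names, not SRDS code).
    def lod_for_error(root_bin_size, error):
        """Number of halvings until the bin size falls below the positional error."""
        lod, size = 0, float(root_bin_size)
        while size >= error:
            size /= 2.0
            lod += 1
        return lod

    def bin_index(coord, origin, root_bin_size, lod):
        """Integer bin index of a coordinate at a given LoD (bin = root / 2**lod)."""
        bin_size = root_bin_size / (1 << lod)
        return int((coord - origin) // bin_size)

    # a point whose position is known to +/- 0.5 m inside a 1024 m root bin
    lod = lod_for_error(1024.0, 0.5)                 # 12 halvings -> 0.25 m bins
    print(lod, bin_index(351.7, 0.0, 1024.0, lod))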

  17. Parallel inversion of a massive ERT data set to characterize deep vadose zone contamination beneath former nuclear waste infiltration galleries at the Hanford Site B-Complex (Invited)

    NASA Astrophysics Data System (ADS)

    Johnson, T.; Rucker, D. F.; Wellman, D.

    2013-12-01

    The Hanford Site, located in south-central Washington, USA, originated in the early 1940's as part of the Manhattan Project and produced plutonium used to build the United States nuclear weapons stockpile. In accordance with accepted industrial practice of that time, a substantial portion of relatively low-activity liquid radioactive waste was disposed of by direct discharge to either surface soil or into near-surface infiltration galleries such as cribs and trenches. This practice was supported by early investigations beginning in the 1940s, including studies by Geological Survey (USGS) experts, whose investigations found vadose zone soils at the site suitable for retaining radionuclides to the extent necessary to protect workers and members of the general public based on the standards of that time. That general disposal practice has long since been discontinued, and the US Department of Energy (USDOE) is now investigating residual contamination at former infiltration galleries as part of its overall environmental management and remediation program. Most of the liquid wastes released into the subsurface were highly ionic and electrically conductive, and therefore present an excellent target for imaging by Electrical Resistivity Tomography (ERT) within the low-conductivity sands and gravels comprising Hanford's vadose zone. In 2006, USDOE commissioned a large scale surface ERT survey to characterize vadose zone contamination beneath the Hanford Site B-Complex, which contained 8 infiltration trenches, 12 cribs, and one tile field. The ERT data were collected in a pole-pole configuration with 18 north-south trending lines, and 18 east-west trending lines ranging from 417m to 816m in length. The final data set consisted of 208,411 measurements collected on 4859 electrodes, covering an area of 600m x 600m. Given the computational demands of inverting this massive data set as a whole, the data were initially inverted in parts with a shared memory inversion code, which revealed the general footprint of vadose zone contamination beneath infiltration galleries. In 2011, the USDOE commissioned an effort to re-invert the B-Complex ERT data as a whole using a recently developed massively parallel 3D ERT inversion code. The computational mesh included approximately 1.085 million elements and closely honored the 37m of topographic relief as determined by LiDAR imaging. The water table and tank boundaries were also incorporated into the mesh to facilitate regularization disconnects, enabling sharp conductivity contrasts where they occur naturally without penalty. The data were inverted using 1024 processors, requiring 910 Gb of memory and 11.5 hours of computation time. The imaging results revealed previously unrealized detail concerning the distribution and behavior of contaminants migrating through the vadose zone, and are currently being used by site cleanup operators and regulators to understand the origin of a groundwater nitrate plume emerging from one of the infiltration galleries. The results overall demonstrate the utility of high performance computing, unstructured meshing, and custom regularization constraints for optimal processing of massive ERT data sets enabled by modern ERT survey hardware.

  18. What's in It for Me? Incentives, Learning, and Completion in Massive Open Online Courses

    ERIC Educational Resources Information Center

    Reeves, Todd D.; Tawfik, Andrew A.; Msilu, Fortunata; Simsek, Irfan

    2017-01-01

    This study investigated the distribution of incentives (e.g., certificates, badges) for massive open online course (MOOC) completion, and relationships between incentives and MOOC outcomes. Participants were 779 MOOC students internationally who participated in at least 303 different MOOCs offered by at least 12 providers. MOOC participants most…

  19. Geochemical studies of rare earth elements in the Portuguese pyrite belt, and geologic and geochemical controls on gold distribution

    USGS Publications Warehouse

    Grimes, David J.; Earhart, Robert L.; de Carvalho, Delfim; Oliveira, Vitor; Oliveira, Jose T.; Castro, Paulo

    1998-01-01

    This report describes geochemical and geological studies which were conducted by the U.S. Geological Survey (USGS) and the Servicos Geologicos de Portugal (SPG) in the Portuguese pyrite belt (PPB) in southern Portugal. The studies included rare earth element (REE) distributions and geological and geochemical controls on the distribution of gold. Rare earth element distributions were determined in representative samples of the volcanic rocks from five west-trending sub-belts of the PPB in order to test the usefulness of REE as a tool for the correlation of volcanic events, and to determine their mobility and application as hydrothermal tracers. REE distributions in felsic volcanic rocks show increases in the relative abundances of heavy REE and a decrease in La/Yb ratios from north to south in the Portuguese pyrite belt. Anomalous amounts of gold are distributed in and near massive and disseminated sulfide deposits in the PPB. Gold is closely associated with copper in the middle and lower parts of the deposits. Weakly anomalous concentrations of gold were noted in exhalative sedimentary rocks that are stratigraphically above massive sulfide deposits in a distal manganiferous facies, whereas anomalously low concentrations were detected in the barite-rich, proximal-facies exhalites. Altered and pyritic felsic volcanic rocks locally contain highly anomalous concentrations of gold, suggesting that disseminated sulfide deposits and the non-ore parts of massive sulfide deposits should be evaluated for their gold potential.

  20. Gravitational Instabilities in the Disks of Massive Protostars as an Explanation for Linear Distributions of Methanol Masers

    NASA Astrophysics Data System (ADS)

    Durisen, Richard H.; Mejia, Annie C.; Pickett, Brian K.; Hartquist, Thomas W.

    2001-12-01

    Evidence suggests that some masers associated with massive protostars may originate in the outer regions of large disks, at radii of hundreds to thousands of AU from the central mass. This is particularly true for methanol (CH3OH), for which linear distributions of masers are found with disklike kinematics. In three-dimensional hydrodynamics simulations we have made to study the effects of gravitational instabilities in the outer parts of disks around young low-mass stars, the nonlinear development of the instabilities leads to a complex of intersecting spiral shocks, clumps, and arclets within the disk and to significant time-dependent, nonaxisymmetric distortions of the disk surface. A rescaling of our disk simulations to the case of a massive protostar shows that conditions in the disturbed outer disk seem conducive to the appearance of masers if it is viewed edge-on.

  1. SED Modeling of 20 Massive Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Tanti, Kamal Kumar

    In this paper, we present spectral energy distribution (SED) modeling of twenty massive young stellar objects (MYSOs) and subsequently estimate different physical and structural/geometrical parameters for each of the twenty central YSO outflow candidates, along with their associated circumstellar disks and infalling envelopes. The SEDs for each of the MYSOs have been reconstructed using 2MASS, MSX, IRAS, IRAC & MIPS, SCUBA, WISE, SPIRE and IRAM data, with the help of an SED Fitting Tool that uses a grid of 2D radiative transfer models. Using the detailed analysis of SEDs and the subsequent estimation of physical and geometrical parameters for the central YSO sources along with their circumstellar disks and envelopes, the cumulative distribution of the stellar, disk and envelope parameters can be analyzed. This leads to a better understanding of massive star formation processes in their respective star-forming regions in different molecular clouds.

  2. GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen

    2015-09-30

    Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
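
    A minimal, in-memory illustration of the Gather-Apply-Scatter model that GraphReduce builds on (one PageRank-style step over an edge list). It deliberately omits the GPU streams, graph partitioning and out-of-core data movement that are the framework's actual contribution.

    # One Gather-Apply-Scatter (GAS) step, expressed as plain Python.
    def gas_step(num_vertices, edges, rank, out_degree, damping=0.85):
        # Gather: each destination vertex accumulates contributions along in-edges.
        acc = [0.0] * num_vertices
        for src, dst in edges:                       # edge-centric traversal
            acc[dst] += rank[src] / out_degree[src]
        # Apply: combine the gathered value with the vertex's own state.
        new_rank = [(1.0 - damping) / num_vertices + damping * a for a in acc]
        # Scatter: the updated rank is what neighbours read next iteration,
        # so no extra per-edge message needs to be written here.
        return new_rank

    edges = [(0, 1), (1, 2), (2, 0), (2, 1)]
    n = 3
    out_degree = [sum(1 for s, _ in edges if s == v) for v in range(n)]
    rank = [1.0 / n] * n
    for _ in range(20):
        rank = gas_step(n, edges, rank, out_degree)
    print([round(r, 3) for r in rank])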

  3. a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data focus on gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that builds a k-d tree over the points, searches each point's neighbourhood with a k-nearest-neighbour algorithm, and applies an appropriate threshold to judge whether the target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
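
    A sketch of the k-d tree / k-nearest-neighbour filtering idea using SciPy's k-d tree. The threshold rule (mean plus three standard deviations of the average neighbour distance) is an illustrative choice, not necessarily the criterion settled on in the paper.

    # k-d tree based gross-error (outlier) filter for a point cloud.
    import numpy as np
    from scipy.spatial import cKDTree

    def remove_gross_errors(points, k=8, n_sigma=3.0):
        """points: (N, 3) array. Returns the inlier points and a boolean mask."""
        tree = cKDTree(points)
        # distances to the k nearest neighbours (first column is the point itself)
        dist, _ = tree.query(points, k=k + 1)
        mean_dist = dist[:, 1:].mean(axis=1)
        threshold = mean_dist.mean() + n_sigma * mean_dist.std()
        mask = mean_dist <= threshold
        return points[mask], mask

    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(1000, 3))                       # dense "real" surface
    outliers = rng.uniform(-30, 30, size=(20, 3))            # sparse gross errors
    clean, mask = remove_gross_errors(np.vstack([cloud, outliers]))
    print(mask[-20:].sum(), "of 20 outliers kept")           # ideally 0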

  4. Separation in 5 Msun Binaries

    NASA Astrophysics Data System (ADS)

    Evans, Nancy R.; Bond, H. E.; Schaefer, G.; Mason, B. D.; Karovska, M.; Tingle, E.

    2013-01-01

    Cepheids (5 Msun stars) provide an excellent sample for determining the binary properties of fairly massive stars. International Ultraviolet Explorer (IUE) observations of Cepheids brighter than 8th magnitude resulted in a list of ALL companions more massive than 2.0 Msun uniformly sensitive to all separations. Hubble Space Telescope Wide Field Camera 3 (WFC3) has resolved three of these binaries (Eta Aql, S Nor, and V659 Cen). Combining these separations with orbital data in the literature, we derive an unbiased distribution of binary separations for a sample of 18 Cepheids, and also a distribution of mass ratios. The distribution of orbital periods shows that the 5 Msun binaries prefer shorter periods than 1 Msun stars, reflecting differences in star formation processes.

  5. Vienna FORTRAN: A FORTRAN language extension for distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Zima, Hans

    1991-01-01

    Exploiting the performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna FORTRAN is a language extension of FORTRAN which provides the user with a wide range of facilities for such mapping of data structures. However, programs in Vienna FORTRAN are written using global data references. Thus, the user has the advantage of a shared memory programming paradigm while explicitly controlling the placement of data. The basic features of Vienna FORTRAN are presented along with a set of examples illustrating the use of these features.
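
    The bookkeeping implied by a simple BLOCK distribution, where the programmer writes global indices and the compiler/runtime resolves each one to an owning processor and a local offset, can be sketched as follows. This is a generic illustration, not Vienna FORTRAN syntax or its actual runtime.

    # Owner and local index of a global index under a BLOCK distribution.
    def block_owner(global_index, n, nprocs):
        block = -(-n // nprocs)                 # ceil(n / nprocs) elements per processor
        return global_index // block, global_index % block

    n, nprocs = 100, 8
    for g in (0, 12, 13, 99):
        print(g, block_owner(g, n, nprocs))     # e.g. global index 13 -> (proc 1, local 0)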

  6. The Effect of the Underlying Distribution in Hurst Exponent Estimation

    PubMed Central

    Sánchez, Miguel Ángel; Trinidad, Juan E.; García, José; Fernández, Manuel

    2015-01-01

    In this paper, a heavy-tailed distribution approach is considered in order to explore the behavior of actual financial time series. We show that this kind of distribution allows us to properly fit the empirical distribution of stocks from the S&P500 index. In addition to that, we explain in detail why the underlying distribution of the random process under study should be taken into account before using its self-similarity exponent as a reliable tool to state whether that financial series displays long-range dependence or not. Finally, we show that, under this model, no stocks from the S&P500 index show persistent memory, whereas some of them do present anti-persistent memory and most of them present no memory at all. PMID:26020942
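
    For context, one common baseline estimator of the self-similarity (Hurst) exponent is the classical rescaled-range (R/S) statistic sketched below, where H near 0.5 indicates no memory, H above 0.5 persistent memory and H below 0.5 anti-persistent memory. The paper's point is precisely that such estimates must be interpreted in light of the underlying heavy-tailed distribution, so treat this as illustration only; window sizes are arbitrary choices.

    # Classical rescaled-range (R/S) estimate of the Hurst exponent.
    import numpy as np

    def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
        x = np.asarray(x, dtype=float)
        log_n, log_rs = [], []
        for n in window_sizes:
            rs_vals = []
            for start in range(0, len(x) - n + 1, n):
                w = x[start:start + n]
                dev = np.cumsum(w - w.mean())
                r = dev.max() - dev.min()                 # range of cumulative deviations
                s = w.std()
                if s > 0:
                    rs_vals.append(r / s)
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
        slope, _ = np.polyfit(log_n, log_rs, 1)           # slope approximates the exponent
        return slope

    rng = np.random.default_rng(3)
    returns = rng.standard_normal(4096)                   # i.i.d. noise: expect H near 0.5
    print(round(hurst_rs(returns), 2))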

  7. Modeling Distributions of Immediate Memory Effects: No Strategies Needed?

    ERIC Educational Resources Information Center

    Beaman, C. Philip; Neath, Ian; Surprenant, Aimee M.

    2008-01-01

    Many models of immediate memory predict the presence or absence of various effects, but none have been tested to see whether they predict an appropriate distribution of effect sizes. The authors show that the feature model (J. S. Nairne, 1990) produces appropriate distributions of effect sizes for both the phonological confusion effect and the…

  8. Address tracing for parallel machines

    NASA Technical Reports Server (NTRS)

    Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

    1991-01-01

    Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.

  9. Implementation of a 3D version of ponderomotive guiding center solver in particle-in-cell code OSIRIS

    NASA Astrophysics Data System (ADS)

    Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo

    2016-10-01

    Laser-driven accelerators have gained increased attention over the past decades. Typical modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) simulations. PIC simulations, however, are very computationally expensive due to the disparity of the relevant scales, ranging from the laser wavelength, in the micrometer range, to the acceleration length, currently beyond the ten centimeter range. To minimize the gap between these disparate scales the ponderomotive guiding center (PGC) algorithm is a promising approach. By describing the evolution of the laser pulse envelope separately, only the scales larger than the plasma wavelength are required to be resolved in the PGC algorithm, leading to speedups of several orders of magnitude. Previous work was limited to two dimensions. Here we present the implementation of the 3D version of a PGC solver into the massively parallel, fully relativistic PIC code OSIRIS. We extended the solver to include periodic boundary conditions and parallelization in all spatial dimensions. We present benchmarks for distributed and shared memory parallelization. We also discuss the stability of the PGC solver.

  10. Internal velocity and mass distributions in simulated clusters of galaxies for a variety of cosmogonic models

    NASA Technical Reports Server (NTRS)

    Cen, Renyue

    1994-01-01

    The mass and velocity distributions in the outskirts (0.5-3.0/h Mpc) of simulated clusters of galaxies are examined for a suite of cosmogonic models (two Omega(sub 0) = 1 and two Omega(sub 0) = 0.2 models) utilizing large-scale particle-mesh (PM) simulations. Through a series of model computations, designed to isolate the different effects, we find that both Omega(sub 0) and P(sub k) (lambda less than or = 16/h Mpc) are important to the mass distributions in clusters of galaxies. There is a correlation between power, P(sub k), and density profiles of massive clusters; more power tends toward a stronger correlation between alpha and M(r less than 1.5/h Mpc), i.e., massive clusters being relatively extended and small-mass clusters being relatively concentrated. A lower Omega(sub 0) universe tends to produce relatively concentrated massive clusters and relatively extended small-mass clusters compared to their counterparts in a higher Omega(sub 0) model with the same power. Models with little (initial) small-scale power, such as the hot dark matter (HDM) model, produce more extended mass distributions than the isothermal distribution for most of the mass clusters. But the cold dark matter (CDM) models show mass distributions of most of the clusters more concentrated than the isothermal distribution. X-ray and gravitational lensing observations are beginning to provide useful information on the mass distribution in and around clusters; some interesting constraints on Omega(sub 0) and/or the (initial) power of the density fluctuations on scales lambda less than or = 16/h Mpc (where linear extrapolation is invalid) can be obtained when larger observational data sets, such as the Sloan Digital Sky Survey, become available.

  11. I/O efficient algorithms and applications in geographic information systems

    NASA Astrophysics Data System (ADS)

    Danner, Andrew

    Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two-level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.

  12. Enabling Graph Appliance for Genome Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Rina; Graves, Jeffrey A; Lee, Sangkeun

    2015-01-01

    In recent years, there has been a huge growth in the amount of genomic data available as reads generated from various genome sequencers. The volume of reads generated can be huge, ranging from hundreds to billions of nucleotides, with individual reads varying in size. Assembling such large amounts of data is one of the challenging computational problems for both biomedical and data scientists. Most of the genome assemblers developed have used de Bruijn graph techniques. A de Bruijn graph represents a collection of read sequences by billions of vertices and edges, which require large amounts of memory and computational power to store and process. This is the major drawback to de Bruijn graph assembly. Massively parallel, multi-threaded, shared memory systems can be leveraged to overcome some of these issues. The objective of our research is to investigate the feasibility and scalability issues of de Bruijn graph assembly on Cray's Urika-GD system; Urika-GD is a high performance graph appliance with a large shared memory and massively multithreaded custom processor designed for executing SPARQL queries over large-scale RDF data sets. However, to the best of our knowledge, there is no research on representing a de Bruijn graph as an RDF graph or finding Eulerian paths in RDF graphs using SPARQL for potential genome discovery. In this paper, we address the issues involved in representing de Bruijn graphs as RDF graphs and propose an iterative querying approach for finding Eulerian paths in large RDF graphs. We evaluate the performance of our implementation on real-world Ebola genome datasets and illustrate how genome assembly can be accomplished with Urika-GD using iterative SPARQL queries.
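
    The de Bruijn construction itself is compact enough to sketch: unique k-mers become edges between overlapping (k-1)-mer nodes, and an Eulerian path through the edges spells a candidate assembly (Hierholzer's algorithm below). This is the standard textbook version on a toy read set, not the RDF/SPARQL-based iterative querying approach proposed in the paper; collapsing coverage to unique k-mers is a deliberate simplification.

    # Toy de Bruijn graph assembly: build the graph, then walk an Eulerian path.
    from collections import defaultdict

    def de_bruijn(reads, k):
        """Unique k-mers become edges between overlapping (k-1)-mer nodes."""
        kmers = {read[i:i + k] for read in reads for i in range(len(read) - k + 1)}
        graph = defaultdict(list)
        for kmer in kmers:
            graph[kmer[:-1]].append(kmer[1:])
        return graph

    def eulerian_path(graph):
        """Hierholzer's algorithm; assumes an Eulerian path exists."""
        out_deg = {v: len(ns) for v, ns in graph.items()}
        in_deg = defaultdict(int)
        for ns in graph.values():
            for n in ns:
                in_deg[n] += 1
        # start at a node with one more outgoing than incoming edge, if any
        start = next((v for v in graph if out_deg[v] - in_deg[v] == 1),
                     next(iter(graph)))
        adj = {v: list(ns) for v, ns in graph.items()}
        stack, path = [start], []
        while stack:
            v = stack[-1]
            if adj.get(v):
                stack.append(adj[v].pop())
            else:
                path.append(stack.pop())
        return path[::-1]

    reads = ["ATGGCGTG", "GCGTGCAAT"]                 # reads covering ATGGCGTGCAAT
    path = eulerian_path(de_bruijn(reads, k=4))
    print(path[0] + "".join(node[-1] for node in path[1:]))   # -> ATGGCGTGCAAT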

  13. Digging deeper on "deep" learning: A computational ecology approach.

    PubMed

    Buscema, Massimo; Sacco, Pier Luigi

    2017-01-01

    We propose an alternative approach to "deep" learning that is based on computational ecologies of structurally diverse artificial neural networks, and on dynamic associative memory responses to stimuli. Rather than focusing on massive computation of many different examples of a single situation, we opt for model-based learning and adaptive flexibility. Cross-fertilization of learning processes across multiple domains is the fundamental feature of human intelligence that must inform "new" artificial intelligence.

  14. The language parallel Pascal and other aspects of the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  15. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From these models, sets of reliability and coverage equations for the systems were derived. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.

  16. Lack of Original Antigenic Sin in Recall CD8+ T Cell Responses

    PubMed Central

    Zehn, Dietmar; Turner, Michael J.; Lefrançois, Leo; Bevan, Michael J.

    2010-01-01

    In the real world, mice and men are not immunologically naive, having been exposed to numerous antigenic challenges. Prior infections sometimes negatively impact the response to a subsequent infection. This can occur in serial infections with pathogens sharing cross-reactive Ags. At the T cell level it has been proposed that preformed memory T cells, which cross-react with low avidity to epitopes presented in subsequent infections, dampen the response of high-avidity T cells. We investigated this with a series of related MHC class-I restricted Ags expressed by bacterial and viral pathogens. In all cases, we find that high-avidity CD8+ T cell precursors, either naive or memory, massively expand in secondary cross-reactive infections to dominate the response over low-avidity memory T cells. This holds true even when >10% of the CD8+ T cell compartment consists of memory T cells that cross-react weakly with the rechallenge ligand. Occasionally, memory cells generated by low-avidity stimulation in a primary infection recognize a cross-reactive epitope with high avidity and contribute positively to the response to a second infection. Taken together, our data show that the phenomenon of original antigenic sin does not occur in all heterologous infections. PMID:20439913

  17. Electrically charged: An effective mechanism for soft EOS supporting massive neutron star

    NASA Astrophysics Data System (ADS)

    Jing, ZhenZhen; Wen, DeHua; Zhang, XiangDong

    2015-10-01

    The discoverers of massive neutron stars announced that strange particles such as hyperons should be ruled out in the neutron star core, as the soft Equation of State (EOS) cannot support a massive neutron star. However, many nuclear theories and laboratory experiments support that at high density strange particles will appear and the corresponding EOS of super-dense matter will become soft. This situation poses a challenge between astro-observation and nuclear physics. In this work, we introduce an effective mechanism to answer this challenge: if a neutron star is electrically charged, a soft EOS will be equivalently stiffened and thus can support a massive neutron star. By employing a representative soft EOS, it is found that in order to obtain an evident effect on the EOS, and thus increase the maximum stellar mass by the electrostatic field, the total net charge should be on the order of 10^20 C. Moreover, by comparing the results of two kinds of charge distributions, it is found that even for different distributions, a similar total charge of ~2.3 × 10^20 C is needed to support a ~2.0 M⊙ neutron star.

  18. No evidence of disk destruction by OB stars

    NASA Astrophysics Data System (ADS)

    Richert, Alexander J. W.; Feigelson, Eric

    2015-01-01

    It has been suggested that the hostile environments observed in massive star forming regions are inhospitable to protoplanetary disks and therefore to the formation of planets. The Orion Proplyds show disk evaporation by extreme ultraviolet (EUV) photons from Theta1 Orionis C (spectral type O6). In this work, we examine the spatial distributions of disk-bearing and non-disk-bearing young stellar objects (YSOs) relative to OB stars in 17 massive star forming regions in the MYStIX (Massive Young Star-Forming Complex Study in Infrared and X-ray) survey. Any tendency of disky YSOs, identified by their infrared excess, to avoid OB stars would reveal complete disk destruction. We consider a sample of MYStIX that includes 78 O3-O9 stars, 256 B stars, 5,606 disky YSOs, and 5,794 non-disky YSOs. For each OB star, we compare the cumulative distribution functions of distances to disky and non-disky YSOs. We find no significant avoidance of OB stars by disky YSOs. This result indicates that OB stars are not sufficiently EUV-luminous and long-lived to completely destroy a disk within its ordinary lifetime. We therefore conclude that massive star forming regions are not clearly hostile to the formation of planets.

  19. An Effective Massive Sensor Network Data Access Scheme Based on Topology Control for the Internet of Things.

    PubMed

    Yi, Meng; Chen, Qingkui; Xiong, Neal N

    2016-11-03

    This paper considers the distributed access and control problem of massive wireless sensor networks' data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and to make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates resource information from location information. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm of group migration based on the combination scheme between the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms existing schemes: it enhances the accessibility of service requests, reduces network delay, and achieves higher load-balancing capacity and resource utilization.

  20. Memory-assisted quantum key distribution resilient against multiple-excitation effects

    NASA Astrophysics Data System (ADS)

    Lo Piparo, Nicolò; Sinclair, Neil; Razavi, Mohsen

    2018-01-01

    Memory-assisted measurement-device-independent quantum key distribution (MA-MDI-QKD) has recently been proposed as a technique to improve the rate-versus-distance behavior of QKD systems by using existing, or nearly-achievable, quantum technologies. The promise is that MA-MDI-QKD would require less demanding quantum memories than the ones needed for probabilistic quantum repeaters. Nevertheless, early investigations suggest that, in order to beat the conventional memory-less QKD schemes, the quantum memories used in the MA-MDI-QKD protocols must have high bandwidth-storage products and short interaction times. Among different types of quantum memories, ensemble-based memories offer some of the required specifications, but they typically suffer from multiple excitation effects. To avoid the latter issue, in this paper, we propose two new variants of MA-MDI-QKD both relying on single-photon sources for entangling purposes. One is based on known techniques for entanglement distribution in quantum repeaters. This scheme turns out to offer no advantage even if one uses ideal single-photon sources. By finding the root cause of the problem, we then propose another setup, which can outperform single memory-less setups even if we allow for some imperfections in our single-photon sources. For such a scheme, we compare the key rate for different types of ensemble-based memories and show that certain classes of atomic ensembles can improve the rate-versus-distance behavior.

  1. Execution time supports for adaptive scientific algorithms on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.
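
    The PARTI primitives themselves are not reproduced here; the following single-process numpy sketch only illustrates the gather/scatter pattern that such runtime support automates: a translation table maps global indices to owning process and local offset, a gather collects the off-processor values referenced by an irregular loop, and a scatter writes results back to their owners. The block layout and names are illustrative assumptions.

    ```python
    import numpy as np

    # Block distribution of an 8-element global array over two "processes".
    local = {0: np.arange(0, 4, dtype=float), 1: np.arange(4, 8, dtype=float)}
    BLOCK = 4

    def owner(g):           # translation table: global index -> owning process
        return g // BLOCK

    def offset(g):          # translation table: global index -> local offset
        return g % BLOCK

    def gather(global_idx):
        """Collect arbitrarily indexed global values into a contiguous local buffer."""
        return np.array([local[owner(g)][offset(g)] for g in global_idx])

    def scatter(global_idx, values):
        """Write values back to the processes that own them."""
        for g, v in zip(global_idx, values):
            local[owner(g)][offset(g)] = v

    # Irregular loop x[index[i]] = 2 * x[index[i]]: derive the pattern, gather, compute, scatter.
    index = np.array([7, 2, 5, 0])
    buf = gather(index)           # communication phase (conceptually, message exchange)
    scatter(index, 2.0 * buf)     # results pushed back to their owners
    print(local)
    ```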

  2. Execution time support for scientific programs on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.

  3. The Correspondence between Convergence Peaks from Weak Lensing and Massive Dark Matter Haloes

    NASA Astrophysics Data System (ADS)

    Wei, Chengliang; Li, Guoliang; Kang, Xi; Liu, Xiangkun; Fan, Zuhui; Yuan, Shuo; Pan, Chuzhong

    2018-05-01

    Convergence peaks, constructed from galaxy shape measurements in weak lensing, are a powerful probe of cosmology, as the peaks can be connected with the underlying dark matter haloes. However, the capability of convergence peak statistics is affected by the noise in galaxy shape measurement, the signal-to-noise ratio (SNR), and the contribution of the projected mass distribution from large-scale structures along the line of sight (LOS). In this paper we use ray-tracing simulations on a curved sky to investigate the correspondence between convergence peaks and the dark matter haloes along the LOS. We find that, in the case of no noise and for source galaxies at zs = 1, more than 65% of peaks with SNR ≥ 3 are related to more than one massive halo with mass larger than 10^13 M⊙. Those massive haloes contribute 87.2% to high peaks (SNR ≥ 5), with the remaining contributions coming from large-scale structures. On the other hand, the peak distribution is skewed by the noise in galaxy shape measurement, especially for lower-SNR peaks. In a noisy field where the shape noise is modelled as a Gaussian distribution, about 60% of high peaks (SNR ≥ 5) are true peaks, and the fraction decreases to 20% for lower peaks (3 ≤ SNR < 5). Furthermore, we find that high peaks (SNR ≥ 5) are dominated by very massive haloes larger than 10^14 M⊙.

  4. Forgetting in immediate serial recall: decay, temporal distinctiveness, or interference?

    PubMed

    Oberauer, Klaus; Lewandowsky, Stephan

    2008-07-01

    Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively. The models were fit to 2 experiments investigating the effect of filled delays between items at encoding or at recall. Short delays between items, filled with articulatory suppression, led to massive impairment of memory relative to a no-delay baseline. Extending the delays had little additional effect, suggesting that the passage of time alone does not cause forgetting. Adding a choice reaction task in the delay periods to block attention-based rehearsal did not change these results. The interference-based SOB fit the data best; the primacy model overpredicted the effect of lengthening delays, and SIMPLE was unable to explain the effect of delays at encoding. The authors conclude that purely temporal views of forgetting are inadequate. Copyright (c) 2008 APA, all rights reserved.

  5. Logic gates realized by nonvolatile GeTe/Sb2Te3 super lattice phase-change memory with a magnetic field input

    NASA Astrophysics Data System (ADS)

    Lu, Bin; Cheng, Xiaomin; Feng, Jinlong; Guan, Xiawei; Miao, Xiangshui

    2016-07-01

    Nonvolatile memory devices or circuits that can implement both storage and calculation are a crucial requirement for improving the efficiency of modern computers. In this work, we realize logic functions by using a [GeTe/Sb2Te3]n super lattice phase-change memory (PCM) cell, in which a higher threshold voltage is needed for phase change when a magnetic field is applied. First, the [GeTe/Sb2Te3]n super lattice cells were fabricated and the R-V curve was measured. Then we designed logic circuits with the super lattice PCM cell and verified them by HSPICE simulation and experiments. Seven basic logic functions are first demonstrated in this letter; then several multi-input logic gates are presented. The proposed logic devices offer the advantages of simple structure and low power consumption, indicating that the super lattice PCM has potential for future nonvolatile central processing unit design, facilitating the development of massively parallel computing architectures.

  6. Attenuation of the NMR signal in a field gradient due to stochastic dynamics with memory

    NASA Astrophysics Data System (ADS)

    Lisý, Vladimír; Tóthová, Jana

    2017-03-01

    The attenuation function S(t) for an ensemble of spins in a magnetic-field gradient is calculated by accumulation of the phase shifts in the rotating frame resulting from the displacements of spin-bearing particles. The resulting S(t), expressed through the particle mean square displacement, is applicable to any kind of stationary stochastic motion of spins, including non-Markovian dynamics with memory. The known expressions valid for normal and anomalous diffusion are obtained as special cases in the long-time approximation. The method is also applicable to NMR pulse sequences based on the refocusing principle. This is demonstrated by describing the Hahn spin echo experiment. The attenuation of the NMR signal is also evaluated provided that the random motion of the particle is modeled by the generalized Langevin equation with a memory kernel decaying exponentially in time. The models considered in our paper assume massive particles driven by much smaller particles.

  7. AQUAdexIM: highly efficient in-memory indexing and querying of astronomy time series images

    NASA Astrophysics Data System (ADS)

    Hong, Zhi; Yu, Ce; Wang, Jie; Xiao, Jian; Cui, Chenzhou; Sun, Jizhou

    2016-12-01

    Astronomy has always been, and will continue to be, a data-based science, and astronomers nowadays are faced with increasingly massive datasets, one key problem of which is to efficiently retrieve the desired cup of data from the ocean. AQUAdexIM, an innovative spatial indexing and querying method, performs highly efficient on-the-fly queries at users' request to search for Time Series Images from existing observation data on the server side and returns only the desired FITS images to users, so users no longer need to download entire datasets to their local machines, which will only become more impractical as data sizes keep increasing. Moreover, AQUAdexIM maintains a very low storage space overhead, and its specially designed in-memory index structure enables it to search for Time Series Images of a given area of the sky 10 times faster than using Redis, a state-of-the-art in-memory database.

  8. DISCRN: A Distributed Storytelling Framework for Intelligence Analysis.

    PubMed

    Shukla, Manu; Dos Santos, Raimundo; Chen, Feng; Lu, Chang-Tien

    2017-09-01

    Storytelling connects entities (people, organizations) using their observed relationships to establish meaningful storylines. This can be extended to spatiotemporal storytelling that incorporates locations, time, and graph computations to enhance coherence and meaning. But when performed sequentially, these computations become a bottleneck because the massive number of entities makes space and time complexity untenable. This article presents DISCRN, or distributed spatiotemporal ConceptSearch-based storytelling, a distributed framework for performing spatiotemporal storytelling. The framework extracts entities from microblogs and event data, and links these entities using a novel ConceptSearch to derive storylines in a distributed fashion utilizing a key-value pair paradigm. Performing these operations at scale allows deeper and broader analysis of storylines. The novel parallelization techniques speed up the generation and filtering of storylines on massive datasets. Experiments with microblog posts such as Twitter data and Global Database of Events, Language, and Tone events show the efficiency of the techniques in DISCRN.

  9. Two alternate proofs of Wang's lune formula for sparse distributed memory and an integral approximation

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1988-01-01

    In Kanerva's Sparse Distributed Memory, writing to and reading from the memory are done in relation to spheres in an n-dimensional binary vector space. Thus it is important to know how many points are in the intersection of two spheres in this space. Two proofs are given of Wang's formula for spheres of unequal radii, and an integral approximation for the intersection in this case.
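
    As an illustration of the quantity involved, the sketch below is a direct combinatorial count of the points lying at prescribed Hamming distances from two centers; it is offered only to make the sphere-intersection computation concrete, and it is not a transcription of Wang's formula or of the integral approximation discussed in the record. Summing the count over r1 ≤ R1 and r2 ≤ R2 gives the intersection of the corresponding balls.

    ```python
    from math import comb

    def sphere_intersection(n, d, r1, r2):
        """Number of points z in {0,1}^n at Hamming distance r1 from x and r2 from y,
        where the centers x and y are themselves at Hamming distance d.

        Derivation: if z flips a of the d coordinates where x and y disagree and
        b = r1 - a of the n - d coordinates where they agree, then its distance to y
        is (d - a) + b = r2, so a = (d + r1 - r2) / 2 must be a whole number.
        """
        twice_a = d + r1 - r2
        if twice_a % 2 != 0:
            return 0
        a = twice_a // 2
        b = r1 - a
        if not (0 <= a <= d and 0 <= b <= n - d):
            return 0
        return comb(d, a) * comb(n - d, b)

    # Brute-force check on a small space.
    n, d, r1, r2 = 8, 3, 4, 3
    x, y = 0, (1 << d) - 1                 # two centers at Hamming distance d
    count = sum(1 for z in range(1 << n)
                if bin(z ^ x).count("1") == r1 and bin(z ^ y).count("1") == r2)
    assert count == sphere_intersection(n, d, r1, r2)
    print(count)
    ```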

  10. BINARY ASTROMETRIC MICROLENSING WITH GAIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sajadian, Sedighe, E-mail: sajadian@ipm.ir; Department of Physics, Sharif University of Technology, P.O. Box 11155-9161, Tehran

    2015-04-15

    We investigate whether or not Gaia can specify the binary fractions of massive stellar populations in the Galactic disk through astrometric microlensing. Furthermore, we study whether or not some information about their mass distributions can be inferred via this method. In this regard, we simulate the binary astrometric microlensing events due to massive stellar populations according to the Gaia observing strategy by considering (i) stellar-mass black holes, (ii) neutron stars, (iii) white dwarfs, and (iv) main-sequence stars as microlenses. The Gaia efficiency for detecting the binary signatures in binary astrometric microlensing events is ∼10%–20%. By calculating the optical depth due to the mentioned stellar populations, the numbers of the binary astrometric microlensing events being observed with Gaia with detectable binary signatures, for a binary fraction of about 0.1, are estimated to be 6, 11, 77, and 1316, respectively. Consequently, Gaia can potentially specify the binary fractions of these massive stellar populations. However, the binary fraction of black holes measured with this method has a large uncertainty owing to the low number of estimated events. Knowing the binary fractions in massive stellar populations helps with studying gravitational waves. Moreover, we investigate the number of massive microlenses for which Gaia specifies masses through astrometric microlensing of single lenses toward the Galactic bulge. The resulting efficiencies of measuring the mass of the mentioned populations are 9.8%, 2.9%, 1.2%, and 0.8%, respectively. The numbers of their astrometric microlensing events being observed in the Gaia era in which the lens mass can be inferred with a relative error of less than 0.5 toward the Galactic bulge are estimated as 45, 34, 76, and 786, respectively. Hence, Gaia potentially gives us some information about the mass distribution of these massive stellar populations.

  11. Automatic selection of dynamic data partitioning schemes for distributed memory multicomputers

    NASA Technical Reports Server (NTRS)

    Palermo, Daniel J.; Banerjee, Prithviraj

    1995-01-01

    For distributed memory multicomputers such as the Intel Paragon, the IBM SP-2, the NCUBE/2, and the Thinking Machines CM-5, the quality of the data partitioning for a given application is crucial to obtaining high performance. This task has traditionally been the user's responsibility, but in recent years much effort has been directed to automating the selection of data partitioning schemes. Several researchers have proposed systems that are able to produce data distributions that remain in effect for the entire execution of an application. For complex programs, however, such static data distributions may be insufficient to obtain acceptable performance. The selection of distributions that dynamically change over the course of a program's execution adds another dimension to the data partitioning problem. In this paper, we present a technique that can be used to automatically determine which partitionings are most beneficial over specific sections of a program while taking into account the added overhead of performing redistribution. This system is being built as part of the PARADIGM (PARAllelizing compiler for DIstributed memory General-purpose Multicomputers) project at the University of Illinois. The complete system will provide a fully automated means to parallelize programs written in a serial programming model obtaining high performance on a wide range of distributed-memory multicomputers.

  12. Reliable, Memory Speed Storage for Cluster Computing Frameworks

    DTIC Science & Technology

    2014-06-16

    specification API that can capture computations in many of today's popular data-parallel computing models, e.g., MapReduce and SQL. We also ported the Hadoop ... today's big data workloads: • Immutable data: data is immutable once written, since dominant underlying storage systems, such as HDFS [3], only support ... network transfers, so reads can be data-local. • Program size vs. data size: in big data processing, the same operation is repeatedly applied on massive

  13. GPU-based Branchless Distance-Driven Projection and Backprojection

    PubMed Central

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-01-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to be implemented on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation as three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved 137-fold speedup for forward projection and 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated with iterative reconstruction algorithms on both simulated and real datasets. It produced images visually identical to those of the CPU reference algorithm. PMID:29333480
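
    The one-dimensional numpy sketch below is not the GPU CT code evaluated above; it only illustrates the branchless factorization itself: integrate the bin contents into a cumulative profile at the source boundaries, linearly interpolate that profile at the destination boundaries, and differentiate, with no per-boundary branching. The bin layouts are illustrative.

    ```python
    import numpy as np

    def branchless_resample(src_values, src_edges, dst_edges):
        """Redistribute per-bin quantities from source bins to destination bins.

        Branchless distance-driven idea in 1D:
          1) integration:     cumulative sum of bin contents at the source edges,
          2) interpolation:   sample the cumulative profile at the destination edges,
          3) differentiation: adjacent differences give destination bin contents.
        """
        cum = np.concatenate(([0.0], np.cumsum(src_values)))      # step 1
        cum_at_dst = np.interp(dst_edges, src_edges, cum)          # step 2
        return np.diff(cum_at_dst)                                 # step 3

    src_edges = np.linspace(0.0, 4.0, 5)       # 4 source bins of width 1
    src_values = np.array([1.0, 2.0, 3.0, 4.0])
    dst_edges = np.linspace(0.0, 4.0, 9)       # 8 destination bins of width 0.5
    out = branchless_resample(src_values, src_edges, dst_edges)
    print(out, out.sum())                      # total mass is conserved: 10.0
    ```

    Because each of the three steps is a uniform array operation, the same pattern vectorizes naturally on GPU hardware, which is the point made in the record.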

  14. GPU-based Branchless Distance-Driven Projection and Backprojection.

    PubMed

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-12-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to be implemented on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation as three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved 137-fold speedup for forward projection and 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated with iterative reconstruction algorithms on both simulated and real datasets. It produced images visually identical to those of the CPU reference algorithm.

  15. The MASSIVE Survey. VI. The Spatial Distribution and Kinematics of Warm Ionized Gas in the Most Massive Local Early-type Galaxies

    NASA Astrophysics Data System (ADS)

    Pandya, Viraj; Greene, Jenny E.; Ma, Chung-Pei; Veale, Melanie; Ene, Irina; Davis, Timothy A.; Blakeslee, John P.; Goulding, Andy D.; McConnell, Nicholas J.; Nyland, Kristina; Thomas, Jens

    2017-03-01

    We present the first systematic investigation of the existence, spatial distribution, and kinematics of warm ionized gas as traced by the [O II] 3727 Å emission line in 74 of the most massive galaxies in the local universe. All of our galaxies have deep integral-field spectroscopy from the volume- and magnitude-limited MASSIVE survey of early-type galaxies with stellar mass log(M*/M⊙) > 11.5 (M_K < -25.3 mag) and distance D < 108 Mpc. Of the 74 galaxies in our sample, we detect warm ionized gas in 28, which yields a global detection fraction of 38 ± 6% down to a typical [O II] equivalent width limit of 2 Å. MASSIVE fast rotators are more likely to have gas than MASSIVE slow rotators, with detection fractions of 80 ± 10% and 28 ± 6%, respectively. The spatial extents span a wide range of radii (0.6-18.2 kpc; 0.1-4 R_e), and the gas morphologies are diverse, with 17/28 ≈ 61 ± 9% being centrally concentrated, 8/28 ≈ 29 ± 9% exhibiting clear rotation out to several kiloparsecs, and 3/28 ≈ 11 ± 6% being extended but patchy. Three out of four fast rotators show kinematic alignment between the stars and gas, whereas the two slow rotators with robust kinematic measurements available exhibit kinematic misalignment. Our inferred warm ionized gas masses are roughly ~10^5 M⊙. The emission line ratios and radial equivalent width profiles are generally consistent with excitation of the gas by the old underlying stellar population. We explore different gas origin scenarios for MASSIVE galaxies and find that a variety of physical processes are likely at play, including internal gas recycling, cooling out of the hot gaseous halo, and gas acquired via mergers.

  16. Are memory traces localized or distributed?

    PubMed

    Thompson, R F

    1991-01-01

    Evidence supports the view that "memory traces" are formed in the hippocampus and in the cerebellum in classical conditioning of discrete behavioral responses (e.g. eyeblink conditioning). In the hippocampus, learning results in long-lasting increases in excitability of pyramidal neurons that appear to be localized to these neurons (i.e. changes in membrane properties and receptor function). However, these learning-altered pyramidal neurons are distributed widely throughout CA3 and CA1. Although it plays a key role in certain aspects of classical conditioning, the hippocampus is not necessary for learning and memory of the basic conditioned responses. The cerebellum and its associated brain stem circuitry, on the other hand, does appear to be essential (necessary and sufficient) for learning and memory of the conditioned response. Evidence to date is most consistent with a localized trace in the interpositus nucleus and multiple localized traces in cerebellar cortex, each involving relatively large ensembles of neurons. Perhaps "procedural" memory traces are relatively localized and "declarative" traces more widely distributed.

  17. Distributed Saturation

    NASA Technical Reports Server (NTRS)

    Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.

    2007-01-01

    The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting the memory consumption during the symbolic state-space generation is the ability to perform garbage collection to free up the memory occupied by dead nodes. However, garbage collection over a NOW requires a nontrivial communication overhead. In addition, operation cache policies become critical while analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.

  18. Sparse distributed memory: understanding the speed and robustness of expert memory

    PubMed Central

    Brogliato, Marcelo S.; Chada, Daniel M.; Linhares, Alexandre

    2014-01-01

    How can experts, sometimes in exacting detail, almost immediately and very precisely recall memory items from a vast repertoire? The problem in which we will be interested concerns models of theoretical neuroscience that could explain the speed and robustness of an expert's recollection. The approach is based on Sparse Distributed Memory, which has been shown to be plausible, both in a neuroscientific and in a psychological manner, in a number of ways. A crucial characteristic concerns the limits of human recollection, the "tip-of-the-tongue" memory event, which is found at a non-linearity in the model. We expand the theoretical framework, deriving an optimization formula to solve this non-linearity. Numerical results demonstrate how the higher frequency of rehearsal, through work or study, immediately increases the robustness and speed associated with expert memory. PMID:24808842

  19. Thermodynamic Model of Spatial Memory

    NASA Astrophysics Data System (ADS)

    Kaufman, Miron; Allen, P.

    1998-03-01

    We develop and test a thermodynamic model of spatial memory. Our model is an application of statistical thermodynamics to cognitive science. It is related to applications of the statistical mechanics framework in parallel distributed processes research. Our macroscopic model allows us to evaluate an entropy associated with spatial memory tasks. We find that older adults exhibit higher levels of entropy than younger adults. Thurstone's Law of Categorical Judgment, according to which the discriminal processes along the psychological continuum produced by presentations of a single stimulus are normally distributed, is explained by using a Hooke spring model of spatial memory. We have also analyzed a nonlinear modification of the ideal spring model of spatial memory. This work is supported by NIH/NIA grant AG09282-06.

  20. How I Learned to Stop Worrying and Love Eclipsing Binaries

    NASA Astrophysics Data System (ADS)

    Moe, Maxwell Cassady

    Relatively massive B-type stars with closely orbiting stellar companions can evolve to produce Type Ia supernovae, X-ray binaries, millisecond pulsars, mergers of neutron stars, gamma ray bursts, and sources of gravitational waves. However, the formation mechanism, intrinsic frequency, and evolutionary processes of B-type binaries are poorly understood. As of 2012, the binary statistics of massive stars had not been measured at low metallicities, extreme mass ratios, or intermediate orbital periods. This thesis utilizes large data sets of eclipsing binaries to measure the physical properties of B-type binaries in these previously unexplored portions of the parameter space. The updated binary statistics provide invaluable insight into the formation of massive stars and binaries as well as reliable initial conditions for population synthesis studies of binary star evolution. We first compare the properties of B-type eclipsing binaries in our Milky Way Galaxy and the nearby Magellanic Cloud Galaxies. We model the eclipsing binary light curves and perform detailed Monte Carlo simulations to recover the intrinsic properties and distributions of the close binary population. We find the frequency, period distribution, and mass-ratio distribution of close B-type binaries do not significantly depend on metallicity or environment. These results indicate the formation of massive binaries are relatively insensitive to their chemical abundances or immediate surroundings. Second, we search for low-mass eclipsing companions to massive B-type stars in the Large Magellanic Cloud Galaxy. In addition to finding such extreme mass-ratio binaries, we serendipitously discover a new class of eclipsing binaries. Each system comprises a massive B-type star that is fully formed and a nascent low-mass companion that is still contracting toward its normal phase of evolution. The large low-mass secondaries discernibly reflect much of the light they intercept from the hot B-type stars, thereby producing sinusoidal variations in perceived brightness as they orbit. These nascent eclipsing binaries are embedded in the hearts of star-forming emission nebulae, and therefore provide a unique snapshot into the formation and evolution of massive binaries and stellar nurseries. We next examine a large sample of B-type eclipsing binaries with intermediate orbital periods. To achieve such a task, we develop an automated pipeline to classify the eclipsing binaries, measure their physical properties from the observed light curves, and recover the intrinsic binary statistics by correcting for selection effects. We find the population of massive binaries at intermediate separations differ from those orbiting in close proximity. Close massive binaries favor small eccentricities and have correlated component masses, demonstrating they coevolved via competitive accretion during their formation in the circumbinary disk. Meanwhile, B-type binaries at slightly wider separations are born with large eccentricities and are weighted toward extreme mass ratios, indicating the components formed relatively independently and subsequently evolved to their current configurations via dynamical interactions. By using eclipsing binaries as accurate age indicators, we also reveal that the binary orbital eccentricities and the line-of-sight dust extinctions are anticorrelated with respect to time. 
These empirical relations provide robust constraints for tidal evolution in massive binaries and the evolution of the dust content in their surrounding environments. Finally, we compile observations of early-type binaries identified via spectroscopy, eclipses, long-baseline interferometry, adaptive optics, lucky imaging, high-contrast photometry, and common proper motion. We combine the samples from the various surveys and correct for their respective selection effects to determine a comprehensive nature of the intrinsic binary statistics of massive stars. We find the probability distributions of primary mass, secondary mass, orbital period, and orbital eccentricity are all interrelated. These updated multiplicity statistics imply a greater frequency of low-mass X-ray binaries, millisecond pulsars, and Type Ia supernovae than previously predicted.

  1. Forensic Analysis of Window’s(Registered) Virtual Memory Incorporating the System’s Page-File

    DTIC Science & Technology

    2008-12-01

    ... data in a meaningful way. One reason for this is how memory is managed by the operating system. Data belonging to one process can be distributed arbitrarily across ...

  2. Distributed practice can boost evaluative conditioning by increasing memory for the stimulus pairs.

    PubMed

    Richter, Jasmin; Gast, Anne

    2017-09-01

    When presenting a neutral stimulus (CS) in close temporal and spatial proximity to a positive or negative stimulus (US), the former is often observed to adopt the valence of the latter, a phenomenon named evaluative conditioning (EC). It is already well established that under most conditions, contingency awareness is important for an EC effect to occur. In addition to that, some findings suggest that awareness of the stimulus pairs is not only relevant during the learning phase, but that it is also relevant whether memory for the pairings is still available during the measurement phase. As previous research has shown that memory is better after temporally distributed than after contiguous (massed) repetitions, it seems plausible that EC effects are also moderated by distributed practice manipulations. This was tested in the current studies. In two experiments with successful distributed practice manipulations on memory, we show that the magnitude of the EC effect was also larger for pairs learned under spaced compared to massed conditions. Both effects, on memory and on EC, are found after a within-participant and after a between-participant manipulation. However, we did not find significant differences in the EC effect for different conditions of spaced practice. These findings are in line with the assumption that EC is based on similar processes as memory for the pairings. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. A theory for how sensorimotor skills are learned and retained in noisy and nonstationary neural circuits

    PubMed Central

    Ajemian, Robert; D’Ausilio, Alessandro; Moorman, Helene; Bizzi, Emilio

    2013-01-01

    During the process of skill learning, synaptic connections in our brains are modified to form motor memories of learned sensorimotor acts. The more plastic the adult brain is, the easier it is to learn new skills or adapt to neurological injury. However, if the brain is too plastic and the pattern of synaptic connectivity is constantly changing, new memories will overwrite old memories, and learning becomes unstable. This trade-off is known as the stability–plasticity dilemma. Here a theory of sensorimotor learning and memory is developed whereby synaptic strengths are perpetually fluctuating without causing instability in motor memory recall, as long as the underlying neural networks are sufficiently noisy and massively redundant. The theory implies two distinct stages of learning—preasymptotic and postasymptotic—because once the error drops to a level comparable to that of the noise-induced error, further error reduction requires altered network dynamics. A key behavioral prediction derived from this analysis is tested in a visuomotor adaptation experiment, and the resultant learning curves are modeled with a nonstationary neural network. Next, the theory is used to model two-photon microscopy data that show, in animals, high rates of dendritic spine turnover, even in the absence of overt behavioral learning. Finally, the theory predicts enhanced task selectivity in the responses of individual motor cortical neurons as the level of task expertise increases. From these considerations, a unique interpretation of sensorimotor memory is proposed—memories are defined not by fixed patterns of synaptic weights but, rather, by nonstationary synaptic patterns that fluctuate coherently. PMID:24324147

  4. BIRD: A general interface for sparse distributed memory simulators

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.

  5. Autobiographical Memory in Semantic Dementia: A Longitudinal fMRI Study

    ERIC Educational Resources Information Center

    Maguire, Eleanor A.; Kumaran, Dharshan; Hassabis, Demis; Kopelman, Michael D.

    2010-01-01

    Whilst patients with semantic dementia (SD) are known to suffer from semantic memory and language impairments, there is less agreement about whether memory for personal everyday experiences, autobiographical memory, is compromised. In healthy individuals, functional MRI (fMRI) has helped to delineate a consistent and distributed brain network…

  6. cuTauLeaping: A GPU-Powered Tau-Leaping Stochastic Simulator for Massive Parallel Analyses of Biological Systems

    PubMed Central

    Besozzi, Daniela; Pescini, Dario; Mauri, Giancarlo

    2014-01-01

    Tau-leaping is a stochastic simulation algorithm that efficiently reconstructs the temporal evolution of biological systems, modeled according to the stochastic formulation of chemical kinetics. The analysis of dynamical properties of these systems in physiological and perturbed conditions usually requires the execution of a large number of simulations, leading to high computational costs. Since each simulation can be executed independently of the others, a massive parallelization of tau-leaping can lead to significant reductions in the overall running time. The emerging field of General-Purpose Graphics Processing Units (GPGPU) provides power-efficient high-performance computing at a relatively low cost. In this work we introduce cuTauLeaping, a stochastic simulator of biological systems that makes use of GPGPU computing to execute multiple parallel tau-leaping simulations, fully exploiting Nvidia's Fermi GPU architecture. We show how a considerable computational speedup is achieved on GPU by partitioning the execution of tau-leaping into multiple separate phases, and we describe how to avoid some implementation pitfalls related to the scarcity of memory resources on the GPU streaming multiprocessors. Our results show that cuTauLeaping largely outperforms the CPU-based tau-leaping implementation when the number of parallel simulations increases, with a break-even directly depending on the size of the biological system and on the complexity of its emergent dynamics. In particular, cuTauLeaping is exploited to investigate the probability distribution of bistable states in the Schlögl model, and to carry out a bidimensional parameter sweep analysis to study the oscillatory regimes in the Ras/cAMP/PKA pathway in S. cerevisiae. PMID:24663957
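
    cuTauLeaping itself is a CUDA code; the snippet below is only a single-threaded numpy sketch of one tau-leaping step, with a fixed tau and no tau-selection or critical-reaction handling, to make the simulated quantity concrete. The toy reversible reaction network and rate constants are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy network: A + B -> C (rate k1), C -> A + B (rate k2).
    stoich = np.array([[-1, +1],     # change in A per firing of each reaction
                       [-1, +1],     # change in B
                       [+1, -1]])    # change in C
    k = np.array([0.01, 0.1])

    def propensities(x):
        a, b, c = x
        return np.array([k[0] * a * b, k[1] * c])

    def tau_leap_step(x, tau):
        """Fire each reaction a Poisson(a_j * tau) number of times, then update the state."""
        firings = rng.poisson(propensities(x) * tau)
        return np.maximum(x + stoich @ firings, 0)   # crude guard against negative counts

    x = np.array([1000, 800, 0])
    for _ in range(100):
        x = tau_leap_step(x, tau=0.05)
    print(x)
    ```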

  7. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

    A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (> 99%) occurs for a large, scaled problem with 64^3 particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems, including those with inhomogeneous plasmas, on other parallel machines once the machine-dependent parameters are known.

  8. The effects of child abuse as seen in adults: George Orwell.

    PubMed

    Shengold, L

    1985-01-01

    The author presents the case of a patient who showed the massive defensive effects seen in people who were abused in childhood. These effects are similar to those described in George Orwell's 1984 and in his autobiographical writings: denial and "doublethink"; masochistic submission to the tormentor; turning of anger against the self and loving "Big Brother"; identifying with the abuser and tormenting others; a burgeoning of anal mechanisms and obsessive phenomena that results in a massive isolation of affects; excessive emotional control alongside outbursts of rage. The interference with memory and emotions compromises identity and humanity. The unforeseeable evolution of innate gifts in a child sometimes permits a partial transcendence of these crippling defenses, as Orwell partially transcended what appears to have been the emotional deprivation of his childhood and what he felt to have been the abuse of his schoolboy years.

  9. Crystal MD: The massively parallel molecular dynamics software for metal with BCC structure

    NASA Astrophysics Data System (ADS)

    Hu, Changjun; Bai, He; He, Xinfu; Zhang, Boyao; Nie, Ningming; Wang, Xianmeng; Ren, Yingwen

    2017-02-01

    Material irradiation effects are one of the most important issues in the use of nuclear power. However, the lack of high-throughput irradiation facilities and of knowledge about the evolution process leads to a limited understanding of these issues. With the help of high-performance computing, we can gain a deeper understanding of materials at the micro level. In this paper, a new data structure is proposed for the massively parallel simulation of the evolution of metal materials in an irradiation environment. Based on the proposed data structure, we developed new molecular dynamics software named Crystal MD. Simulations with Crystal MD achieved over 90% parallel efficiency in test cases, and it takes more than 25% less memory on multi-core clusters than LAMMPS and IMD, two popular molecular dynamics simulation packages. Using Crystal MD, a two-trillion-particle simulation has been performed on the Tianhe-2 cluster.

  10. Reactivating fear memory under propranolol resets pre-trauma levels of dendritic spines in basolateral amygdala but not dorsal hippocampus neurons

    PubMed Central

    Vetere, Gisella; Piserchia, Valentina; Borreca, Antonella; Novembre, Giovanni; Aceti, Massimiliano; Ammassari-Teule, Martine

    2013-01-01

    Fear memory enhances connectivity in cortical and limbic circuits, but whether treatments disrupting fear reset connectivity to pre-trauma levels is unknown. Here we report that C57BL/6J mice exposed to a tone-shock association in context A (conditioning), and briefly re-exposed to the same tone-shock association in context B (reactivation), exhibit strong freezing to the tone alone delivered 48 h later in context B (long-term fear memory). This intense fear response is associated with a massive increase in dendritic spines and phospho-Erk (p-ERK) signaling in basolateral amygdala (BLA) neurons. We then show that propranolol (a central/peripheral β-adrenergic receptor blocker) administered before, but not after, the reactivation trial attenuates long-term fear memory assessed drug-free 48 h later, and completely prevents the increase in spines and p-ERK signaling in BLA neurons. An increase in spines, but not in p-ERK, was also detected in the dorsal hippocampus (DH) of the conditioned mice. DH spines, however, were unaffected by propranolol, suggesting their independence from the ERK/β-AR cascade. We conclude that propranolol selectively blocks the enhancement of dendritic spines and p-ERK signaling in the BLA; its effect on fear memory is, however, less pronounced, suggesting that the persistence of spines at other brain sites decreases the sensitivity of the fear memory trace to treatments selectively targeting β-ARs in the BLA. PMID:24391566

  11. DART: A Community Facility Providing State-of-the-Art, Efficient Ensemble Data Assimilation for Large (Coupled) Geophysical Models

    NASA Astrophysics Data System (ADS)

    Hoar, T. J.; Anderson, J. L.; Collins, N.; Kershaw, H.; Hendricks, J.; Raeder, K.; Mizzi, A. P.; Barré, J.; Gaubert, B.; Madaus, L. E.; Aydogdu, A.; Raeder, J.; Arango, H.; Moore, A. M.; Edwards, C. A.; Curchitser, E. N.; Escudier, R.; Dussin, R.; Bitz, C. M.; Zhang, Y. F.; Shrestha, P.; Rosolem, R.; Rahman, M.

    2016-12-01

    Strongly-coupled ensemble data assimilation with multiple high-resolution model components requires massive state vectors which need to be efficiently stored and accessed throughout the assimilation process. Supercomputer architectures are tending towards increasing the number of cores per node but have the same or less memory per node. Recent advances in the Data Assimilation Research Testbed (DART), a freely-available community ensemble data assimilation facility that works with dozens of large geophysical models, have addressed the need to run with a smaller memory footprint on a higher node count by utilizing MPI-2 one-sided communication to do non-blocking asynchronous access of distributed data. DART runs efficiently on many computational platforms ranging from laptops through thousands of cores on the newest supercomputers. Benefits of the new DART implementation will be shown. In addition, overviews of the most recently supported models will be presented: CAM-CHEM, WRF-CHEM, CM1, OpenGGCM, FESOM, ROMS, CICE5, TerrSysMP (COSMO, CLM, ParFlow), JULES, and CABLE. DART provides a comprehensive suite of software, documentation, and tutorials that can be used for ensemble data assimilation research, operations, and education. Scientists and software engineers at NCAR are available to support DART users who want to use existing DART products or develop their own applications. Current DART users range from university professors teaching data assimilation, to individual graduate students working with simple models, through national laboratories and state agencies doing operational prediction with large state-of-the-art models.
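
    DART itself is not written in Python; the mpi4py fragment below is only a minimal illustration of the MPI-2 one-sided pattern mentioned above, in which each rank exposes its slice of a distributed state vector in an RMA window and other ranks read it with passive-target Lock/Get/Unlock, without the owner participating in each access. The window size and neighbor choice are illustrative assumptions.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank exposes a slice of the "state vector" in an RMA window.
    local = np.full(4, float(rank), dtype='d')
    win = MPI.Win.Create(local, comm=comm)

    # Read the neighboring rank's slice one-sidedly (passive target).
    target = (rank + 1) % comm.Get_size()
    buf = np.empty(4, dtype='d')
    win.Lock(target, MPI.LOCK_SHARED)
    win.Get([buf, MPI.DOUBLE], target)
    win.Unlock(target)

    win.Free()
    print(rank, buf)
    ```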

  12. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  13. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinfomatics and brain imaging research. PMID:24734019

  14. Online-offline activities and game-playing behaviors of avatars in a massive multiplayer online role-playing game

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing; Tan, Qun-Zhao

    2009-11-01

    Massive multiplayer online role-playing games (MMORPGs) are very popular in China, which provides a potential platform for scientific research. We study the online-offline activities of avatars in an MMORPG to understand their game-playing behavior. The statistical analysis unveils that the active avatars can be classified into three types. The avatars of the first type are owned by game cheaters who go online and offline in preset time intervals with the online duration distributions dominated by pulses. The second type of avatars is characterized by a Weibull distribution in the online durations, which is confirmed by statistical tests. The distributions of online durations of the remaining individual avatars differ from the above two types and cannot be described by a simple form. These findings have potential applications in the game industry.
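
    As a small illustration of the second avatar type described above, the sketch below fits a Weibull distribution to synthetic online-duration data and checks the fit with a Kolmogorov-Smirnov test. The data, parameter values, and the use of scipy are assumptions for demonstration only; this is not the authors' analysis pipeline.

    ```python
    # Sketch: fit a Weibull distribution to synthetic "online duration" data
    # and test the fit, mirroring the type-two avatar classification above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    durations = stats.weibull_min.rvs(c=0.8, scale=45.0, size=5000, random_state=rng)

    # Fit with the location fixed at zero, since durations are non-negative.
    c_hat, loc_hat, scale_hat = stats.weibull_min.fit(durations, floc=0)

    # Kolmogorov-Smirnov test of the fitted model against the data.
    ks_stat, p_value = stats.kstest(durations, 'weibull_min',
                                    args=(c_hat, loc_hat, scale_hat))
    print(f"shape={c_hat:.2f}  scale={scale_hat:.1f}  KS p-value={p_value:.3f}")
    ```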

  15. Distributed Fast Self-Organized Maps for Massive Spectrophotometric Data Analysis †.

    PubMed

    Dafonte, Carlos; Garabato, Daniel; Álvarez, Marco A; Manteiga, Minia

    2018-05-03

    Analyzing huge amounts of data becomes essential in the era of Big Data, where databases are populated with hundreds of Gigabytes that must be processed to extract knowledge. Hence, classical algorithms must be adapted towards distributed computing methodologies that leverage the underlying computational power of these platforms. Here, a parallel, scalable, and optimized design for self-organized maps (SOM) is proposed in order to analyze massive data gathered by the spectrophotometric sensor of the European Space Agency (ESA) Gaia spacecraft, although it could be extrapolated to other domains. The performance comparison between the sequential implementation and the distributed ones based on Apache Hadoop and Apache Spark is an important part of the work, as well as the detailed analysis of the proposed optimizations. Finally, a domain-specific visualization tool to explore astronomical SOMs is presented.
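
    For readers unfamiliar with the underlying algorithm, here is a minimal sequential self-organized map (SOM) training loop in NumPy. It shows only the basic method that the paper parallelizes; the lattice size, learning schedule, and random data are assumptions, and nothing here reproduces the authors' Hadoop/Spark design.

    ```python
    # Minimal sequential SOM sketch: find the best-matching unit for each input
    # vector and pull nearby lattice nodes toward it with a Gaussian neighbourhood.
    import numpy as np

    rng = np.random.default_rng(42)
    n_samples, n_features = 2000, 8          # stand-ins for spectrophotometric vectors
    grid_h, grid_w = 10, 10                  # SOM lattice size (assumed)

    data = rng.normal(size=(n_samples, n_features))
    weights = rng.normal(size=(grid_h, grid_w, n_features))
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]    # lattice coordinates for the neighbourhood

    for epoch in range(10):
        lr = 0.5 * (1 - epoch / 10)                   # decaying learning rate
        sigma = max(1.0, 3.0 * (1 - epoch / 10))      # decaying neighbourhood radius
        for x in data:
            d2 = np.sum((weights - x) ** 2, axis=2)   # distance to every node
            by, bx = np.unravel_index(np.argmin(d2), d2.shape)   # best-matching unit
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)

    print("trained SOM weight grid:", weights.shape)
    ```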

  16. Simulations of Fractal Star Cluster Formation. I. New Insights for Measuring Mass Segregation of Star Clusters with Substructure

    NASA Astrophysics Data System (ADS)

    Yu, Jincheng; Puzia, Thomas H.; Lin, Congping; Zhang, Yiwei

    2017-05-01

    We compare the existing methods, including the minimum spanning tree based method and the local stellar density based method, for measuring mass segregation of star clusters. We find that the minimum spanning tree method is more sensitive to compactness, which represents the global spatial distribution of massive stars, while the local stellar density method is more sensitive to crowdedness, which provides the local gravitational potential information. We suggest measuring the local and the global mass segregation simultaneously. We also develop a hybrid method that takes both aspects into account. This hybrid method balances the local and the global mass segregation in the sense that the predominant one is either caused by dynamical evolution or purely accidental, especially when such information is unknown a priori. In addition, we test our prescriptions with numerical models and show the impact of binaries on estimating the mass segregation value. As an application, we use these methods on the Orion Nebula Cluster (ONC) observations and the Taurus cluster. We find that the ONC is significantly mass segregated down to the 20th most massive star. In contrast, the massive stars of the Taurus cluster are sparsely distributed in many different subclusters, showing a low degree of compactness. The massive stars of Taurus are also found to be distributed in the high-density region of the subclusters, showing significant mass segregation at subcluster scales. Meanwhile, we also apply these methods to discuss the possible mechanisms of the dynamical evolution of the simulated substructured star clusters.
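
    A hedged sketch of the minimum-spanning-tree approach mentioned above follows: it compares the MST length of the N most massive stars to MSTs of random subsets of the same size, in the spirit of widely used MST-based mass segregation ratios. The toy positions, mass function, and normalisation are illustrative assumptions and not necessarily the exact measure used in this paper.

    ```python
    # MST mass-segregation sketch: ratio of the mean MST length of random
    # samples to the MST length of the N most massive stars (>1 suggests the
    # massive stars are more concentrated than average).
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    def mst_length(xy):
        """Total edge length of the Euclidean minimum spanning tree of points xy."""
        return minimum_spanning_tree(squareform(pdist(xy))).sum()

    rng = np.random.default_rng(1)
    n_stars, n_massive = 500, 20
    xy = rng.uniform(-1, 1, size=(n_stars, 2))
    mass = rng.pareto(2.35, size=n_stars) + 0.1       # toy mass function

    massive_idx = np.argsort(mass)[-n_massive:]
    l_massive = mst_length(xy[massive_idx])

    # Reference: MST lengths of random subsets of the same size.
    l_random = np.array([mst_length(xy[rng.choice(n_stars, n_massive, replace=False)])
                         for _ in range(100)])

    ratio = l_random.mean() / l_massive
    print(f"MST mass segregation ratio ~ {ratio:.2f}  (>1 suggests mass segregation)")
    ```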

  17. Simulations of Fractal Star Cluster Formation. I. New Insights for Measuring Mass Segregation of Star Clusters with Substructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jincheng; Puzia, Thomas H.; Lin, Congping

    2017-05-10

    We compare the existing methods, including the minimum spanning tree based method and the local stellar density based method, for measuring mass segregation of star clusters. We find that the minimum spanning tree method is more sensitive to compactness, which represents the global spatial distribution of massive stars, while the local stellar density method is more sensitive to crowdedness, which provides the local gravitational potential information. We suggest measuring the local and the global mass segregation simultaneously. We also develop a hybrid method that takes both aspects into account. This hybrid method balances the local and the global mass segregation in the sense that the predominant one is either caused by dynamical evolution or purely accidental, especially when such information is unknown a priori. In addition, we test our prescriptions with numerical models and show the impact of binaries on estimating the mass segregation value. As an application, we use these methods on the Orion Nebula Cluster (ONC) observations and the Taurus cluster. We find that the ONC is significantly mass segregated down to the 20th most massive star. In contrast, the massive stars of the Taurus cluster are sparsely distributed in many different subclusters, showing a low degree of compactness. The massive stars of Taurus are also found to be distributed in the high-density region of the subclusters, showing significant mass segregation at subcluster scales. Meanwhile, we also apply these methods to discuss the possible mechanisms of the dynamical evolution of the simulated substructured star clusters.

  18. An Effective Massive Sensor Network Data Access Scheme Based on Topology Control for the Internet of Things

    PubMed Central

    Yi, Meng; Chen, Qingkui; Xiong, Neal N.

    2016-01-01

    This paper considers the distributed access and control problem of the data access center of massive wireless sensor networks for the Internet of Things, which is an extension of wireless sensor networks and an element of their topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and to make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates resource information from location information. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal group-migration scheduling algorithm based on a combination of the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms the existing schemes in terms of enhancing the accessibility of service requests and reducing network delay, and it achieves higher load-balancing capacity and a higher resource utilization rate. PMID:27827878

  19. Fractional crystallization-induced variations in sulfides from the Noril’sk-Talnakh mining district (polar Siberia, Russia)

    USGS Publications Warehouse

    Duran, C.J.; Barnes, S-J.; Pleše, P.; Prašek, M. Kudrna; Zientek, Michael L.; Pagé, P.

    2017-01-01

    The distribution of platinum-group elements (PGE) within zoned magmatic ore bodies has been extensively studied and appears to be controlled by the partitioning behavior of the PGE during fractional crystallization of magmatic sulfide liquids. However, other chalcophile elements, especially TABS (Te, As, Bi, Sb, and Sn) have been neglected despite their critical role in forming platinum-group minerals (PGM). TABS are volatile trace elements that are considered to be mobile so investigating their primary distribution may be challenging in magmatic ore bodies that have been somewhat altered. Magmatic sulfide ore bodies from the Noril’sk-Talnakh mining district (polar Siberia, Russia) offer an exceptional opportunity to investigate the behavior of TABS during fractional crystallization of sulfide liquids and PGM formation as the primary features of the ore bodies have been relatively well preserved. In this study, new petrographic (2D and 3D) and whole-rock geochemical data from Cu-poor to Cu-rich sulfide ores of the Noril’sk-Talnakh mining district are integrated with published data to consider the role of fractional crystallization in generating mineralogical and geochemical variations across the different ore types (disseminated to massive). Despite textural variations in Cu-rich massive sulfides (lenses, veins, and breccias), these sulfides have similar chemical compositions, which suggests that Cu-rich veins and breccias formed from fractionated sulfide liquids that were injected into the surrounding rocks. Numerical modeling using the median disseminated sulfide composition as the initial sulfide liquid composition and recent DMSS/liq and DISS/liq partition coefficients predicts the compositional variations observed in the massive sulfides, especially in terms of Pt, Pd, and TABS. Therefore, distribution of these elements in the massive sulfides was likely controlled by their partitioning behavior during sulfide liquid fractional crystallization, prior to PGM formation. Our observations indicate that in the Cu-poor massive sulfides the PGM formed as the result of exsolution from sulfide minerals whereas in the Cu-rich massive sulfides the PGM formed by crystallization from late-stage fractionated sulfide liquids. We suggest that the significant amount of Sn-bearing PGM may be related to crustal contamination from granodiorite, whereas As, Bi, Te, and Sb were likely added to the magma along with S from sedimentary rocks. Large PGM that are scarce and randomly distributed may account for most of the whole-rock Pt budget. Based on our results, we propose a holistic genetic model for the formation of the magmatic sulfide ore bodies of the Noril’sk-Talnakh mining district.
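
    For orientation, the sketch below works through the standard Rayleigh fractional-crystallisation relation, C_liq = C0 * F**(D - 1), which is the kind of partitioning calculation invoked above. The partition coefficients and initial concentrations are placeholders for illustration only; they are not the published DMSS/liq or DISS/liq values used in the study.

    ```python
    # Illustrative Rayleigh fractional crystallisation of a sulfide liquid:
    # C_liq = C0 * F**(D - 1), C_solid = D * C_liq, with F the liquid fraction
    # remaining and D a bulk solid/liquid partition coefficient (placeholder values).
    import numpy as np

    def rayleigh(c0, d, f_liquid_remaining):
        """Residual-liquid and instantaneous-cumulate concentrations."""
        c_liq = c0 * f_liquid_remaining ** (d - 1.0)
        c_solid = d * c_liq
        return c_liq, c_solid

    f = np.linspace(1.0, 0.1, 10)                        # fraction of liquid remaining
    elements = {"compatible element":   (5.0, 4.0),      # enriched in the cumulate
                "incompatible element": (6.0, 0.2)}      # enriched in the residual liquid
    for name, (c0, d) in elements.items():
        c_liq, _ = rayleigh(c0, d, f)
        print(name, np.round(c_liq, 2))
    ```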

  20. Efficient packing of patterns in sparse distributed memory by selective weighting of input bits

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1991-01-01

    When a set of patterns is stored in a distributed memory, any given storage location participates in the storage of many patterns. From the perspective of any one stored pattern, the other patterns act as noise, and such noise limits the memory's storage capacity. The more similar the retrieval cues for two patterns are, the more the patterns interfere with each other in memory, and the harder it is to separate them on retrieval. A method is described of weighting the retrieval cues to reduce such interference and thus to improve the separability of patterns that have similar cues.
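
    The toy sketch below shows one way selective weighting of input bits can enter a sparse distributed memory: locations are activated by a weighted rather than plain Hamming similarity, so discriminative bits count more than bits shared by many similar cues. The specific weighting heuristic and all sizes are illustrative assumptions, not Kanerva's exact method.

    ```python
    # Weighted-cue activation sketch for a sparse distributed memory.
    import numpy as np

    rng = np.random.default_rng(7)
    n_bits, n_locations = 256, 2000
    addresses = rng.integers(0, 2, size=(n_locations, n_bits))   # hard locations

    cue = rng.integers(0, 2, size=n_bits)
    # Placeholder heuristic: give some bits more weight than others (e.g. bits
    # that best discriminate between otherwise similar retrieval cues).
    weights = rng.uniform(0.5, 2.0, size=n_bits)

    matches = (addresses == cue).astype(float)   # bitwise agreement with the cue
    weighted_score = matches @ weights           # weighted similarity per location
    threshold = np.quantile(weighted_score, 0.99)
    active = np.flatnonzero(weighted_score >= threshold)
    print(f"{active.size} locations activated by the weighted cue")
    ```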

  1. Avalanches and generalized memory associativity in a network model for conscious and unconscious mental functioning

    NASA Astrophysics Data System (ADS)

    Siddiqui, Maheen; Wedemann, Roseli S.; Jensen, Henrik Jeldtoft

    2018-01-01

    We explore statistical characteristics of avalanches associated with the dynamics of a complex-network model, where two modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's ideas regarding the neuroses and the view that consciousness is related to symbolic and linguistic memory activity in the brain. It incorporates the Stariolo-Tsallis generalization of the Boltzmann Machine in order to model memory retrieval and associativity. In the present work, we define and measure avalanche size distributions during memory retrieval, in order to gain insight regarding basic aspects of the functioning of these complex networks. The avalanche sizes defined for our model should be related to the time consumed and also to the size of the neuronal region that is activated during memory retrieval. This allows the qualitative comparison of the behaviour of the distribution of cluster sizes, obtained during fMRI measurements of the propagation of signals in the brain, with the distribution of avalanche sizes obtained in our simulation experiments. This comparison corroborates the indication that the Nonextensive Statistical Mechanics formalism may indeed be better suited to model the complex networks which constitute brain and mental structure.

  2. Downlink Training Techniques for FDD Massive MIMO Systems: Open-Loop and Closed-Loop Training With Memory

    NASA Astrophysics Data System (ADS)

    Choi, Junil; Love, David J.; Bidigare, Patrick

    2014-10-01

    The concept of deploying a large number of antennas at the base station, often called massive multiple-input multiple-output (MIMO), has drawn considerable interest because of its potential ability to revolutionize current wireless communication systems. Most literature on massive MIMO systems assumes time division duplexing (TDD), although frequency division duplexing (FDD) dominates current cellular systems. Due to the large number of transmit antennas at the base station, currently standardized approaches would require a large percentage of the precious downlink and uplink resources in FDD massive MIMO be used for training signal transmissions and channel state information (CSI) feedback. To reduce the overhead of the downlink training phase, we propose practical open-loop and closed-loop training frameworks in this paper. We assume the base station and the user share a common set of training signals in advance. In open-loop training, the base station transmits training signals in a round-robin manner, and the user successively estimates the current channel using long-term channel statistics such as temporal and spatial correlations and previous channel estimates. In closed-loop training, the user feeds back the best training signal to be sent in the future based on channel prediction and the previously received training signals. With a small amount of feedback from the user to the base station, closed-loop training offers better performance in the data communication phase, especially when the signal-to-noise ratio is low, the number of transmit antennas is large, or prior channel estimates are not accurate at the beginning of the communication setup, all of which would be mostly beneficial for massive MIMO systems.

  3. Massive parallelization of a 3D finite difference electromagnetic forward solution using domain decomposition methods on multiple CUDA enabled GPUs

    NASA Astrophysics Data System (ADS)

    Schultz, A.

    2010-12-01

    3D forward solvers lie at the core of inverse formulations used to image the variation of electrical conductivity within the Earth's interior. This property is associated with variations in temperature, composition, phase, presence of volatiles, and in specific settings, the presence of groundwater, geothermal resources, oil/gas or minerals. The high cost of 3D solutions has been a stumbling block to wider adoption of 3D methods. Parallel algorithms for modeling frequency domain 3D EM problems have not achieved wide scale adoption, with emphasis on fairly coarse grained parallelism using MPI and similar approaches. The communications bandwidth as well as the latency required to send and receive network communication packets is a limiting factor in implementing fine grained parallel strategies, inhibiting wide adoption of these algorithms. Leading Graphics Processor Unit (GPU) companies now produce GPUs with hundreds of GPU processor cores per die. The footprint, in silicon, of the GPU's restricted instruction set is much smaller than the general purpose instruction set required of a CPU. Consequently, the density of processor cores on a GPU can be much greater than on a CPU. GPUs also have local memory, registers and high speed communication with host CPUs, usually through PCIe type interconnects. The extremely low cost and high computational power of GPUs provides the EM geophysics community with an opportunity to achieve fine grained (i.e. massive) parallelization of codes on low cost hardware. The current generation of GPUs (e.g. NVidia Fermi) provides 3 billion transistors per chip die, with nearly 500 processor cores and up to 6 GB of fast (DDR5) GPU memory. This latest generation of GPU supports fast hardware double precision (64 bit) floating point operations of the type required for frequency domain EM forward solutions. Each Fermi GPU board can sustain nearly 1 TFLOP in double precision, and multiple boards can be installed in the host computer system. We describe our ongoing efforts to achieve massive parallelization on a novel hybrid GPU testbed machine currently configured with 12 Intel Westmere Xeon CPU cores (or 24 parallel computational threads) with 96 GB DDR3 system memory, 4 GPU subsystems which in aggregate contain 960 NVidia Tesla GPU cores with 16 GB dedicated DDR3 GPU memory, and a second interleaved bank of 4 GPU subsystems containing in aggregate 1792 NVidia Fermi GPU cores with 12 GB dedicated DDR5 GPU memory. We are applying domain decomposition methods to a modified version of Weiss' (2001) 3D frequency domain full physics EM finite difference code, an open source GPL licensed f90 code available for download from www.OpenEM.org. This will be the core of a new hybrid 3D inversion that parallelizes frequencies across CPUs and individual forward solutions across GPUs. We describe progress made in modifying the code to use direct solvers in GPU cores dedicated to each small subdomain, iteratively improving the solution by matching adjacent subdomain boundary solutions, rather than iterative Krylov space sparse solvers as currently applied to the whole domain.
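
    To make the "direct solves per subdomain, iterate on the boundaries" idea concrete, here is a deliberately tiny, serial NumPy sketch of overlapping (Schwarz-type) domain decomposition on a 1D Poisson model problem. The subdomain layout, overlap, and iteration count are assumptions for illustration; the actual work described above is a 3D frequency-domain EM finite-difference code running on GPUs.

    ```python
    # Overlapping-Schwarz sketch: split -u'' = f on (0,1) into two overlapping
    # subdomains, solve each directly (standing in for a per-GPU direct solve),
    # and iterate while exchanging boundary values between subdomains.
    import numpy as np

    n = 200                                  # interior grid points
    h = 1.0 / (n + 1)
    f = np.ones(n)                           # right-hand side
    u = np.zeros(n + 2)                      # solution; u[0] = u[-1] = 0 (Dirichlet)

    def solve_subdomain(rhs, left, right):
        """Direct solve of -u'' = f on one subdomain with Dirichlet boundary data."""
        m = len(rhs)
        A = (np.diag(np.full(m, 2.0))
             - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        b = rhs.copy()
        b[0] += left / h**2                  # fold known boundary values into the RHS
        b[-1] += right / h**2
        return np.linalg.solve(A, b)

    subdomains = [(1, n // 2 + 10), (n // 2 - 10, n + 1)]   # overlapping index ranges in u
    for _ in range(50):
        for lo, hi in subdomains:
            u[lo:hi] = solve_subdomain(f[lo - 1:hi - 1], u[lo - 1], u[hi])

    print("max u =", round(u.max(), 4), "(exact maximum is 1/8 = 0.125)")
    ```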

  4. An alternative design for a sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1989-01-01

    A new design for a Sparse Distributed Memory, called the selected-coordinate design, is described. As in the original design, there are a large number of memory locations, each of which may be activated by many different addresses (binary vectors) in a very large address space. Each memory location is defined by specifying ten selected coordinates (bit positions in the address vectors) and a set of corresponding assigned values, consisting of one bit for each selected coordinate. A memory location is activated by an address if, for all ten of the locations's selected coordinates, the corresponding bits in the address vector match the respective assigned value bits, regardless of the other bits in the address vector. Some comparative memory capacity and signal-to-noise ratio estimates for the both the new and original designs are given. A few possible hardware embodiments of the new design are described.
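
    A short sketch of the selected-coordinate activation rule described above follows: each hard location stores ten selected bit positions and their assigned values, and fires only if an address matches on all ten, regardless of the other bits. The address length and number of locations are illustrative assumptions.

    ```python
    # Selected-coordinate activation sketch for a sparse distributed memory.
    import numpy as np

    rng = np.random.default_rng(3)
    n_bits, n_locations, k = 1000, 5000, 10

    # Each location: k selected coordinates and the k assigned bit values.
    sel_coords = np.array([rng.choice(n_bits, size=k, replace=False)
                           for _ in range(n_locations)])
    sel_values = rng.integers(0, 2, size=(n_locations, k))

    def active_locations(address):
        """Indices of locations whose k selected coordinates all match `address`."""
        return np.flatnonzero((address[sel_coords] == sel_values).all(axis=1))

    addr = rng.integers(0, 2, size=n_bits)
    print("activated:", active_locations(addr).size, "of", n_locations, "locations")
    ```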

  5. Autobiographical memory distributions for negative self-images: memories are organised around negative as well as positive aspects of identity.

    PubMed

    Rathbone, Clare J; Steel, Craig

    2015-01-01

    The relationship between developmental experiences, and an individual's emerging beliefs about themselves and the world, is central to many forms of psychotherapy. People suffering from a variety of mental health problems have been shown to use negative memories when defining the self; however, little is known about how these negative memories might be organised and relate to negative self-images. In two online studies with middle-aged (N = 18; study 1) and young (N = 56; study 2) adults, we found that participants' negative self-images (e.g., I am a failure) were associated with sets of autobiographical memories that formed clustered distributions around times of self-formation, in much the same pattern as for positive self-images (e.g., I am talented). This novel result shows that highly organised sets of salient memories may be responsible for perpetuating negative beliefs about the self. Implications for therapy are discussed.

  6. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep a large number of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  7. Sparse distributed memory and related models

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1992-01-01

    Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
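
    A minimal numerical sketch of the two-matrix structure described above follows: a fixed, random address matrix A selects active locations, and a modifiable content matrix C accumulates stored data and is read back by majority vote. The activation radius and all sizes are illustrative choices, not values from the paper.

    ```python
    # Basic SDM sketch: A fixed/random, C modifiable; locations within a Hamming
    # radius of the address are activated for both writing and reading.
    import numpy as np

    rng = np.random.default_rng(5)
    n_bits, n_locations, radius = 256, 4000, 112        # radius chosen for illustration

    A = rng.integers(0, 2, size=(n_locations, n_bits))  # fixed, random addresses
    C = np.zeros((n_locations, n_bits), dtype=int)      # modifiable contents (counters)

    def active(address):
        return np.count_nonzero(A != address, axis=1) <= radius

    def write(address, data_bits):
        C[active(address)] += 2 * data_bits - 1         # store data as +1/-1 increments

    def read(address):
        s = C[active(address)].sum(axis=0)
        return (s > 0).astype(int)                      # per-bit majority vote

    pattern = rng.integers(0, 2, size=n_bits)
    write(pattern, pattern)                             # autoassociative store
    noisy = pattern.copy()
    noisy[rng.choice(n_bits, 20, replace=False)] ^= 1   # corrupt 20 bits of the cue
    print("bits recovered:", np.count_nonzero(read(noisy) == pattern), "/", n_bits)
    ```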

  8. Persistently active neurons in human medial frontal and medial temporal lobe support working memory

    PubMed Central

    Kamiński, J; Sullivan, S; Chung, JM; Ross, IB; Mamelak, AN; Rutishauser, U

    2017-01-01

    Persistent neural activity is a putative mechanism for the maintenance of working memories. Persistent activity relies on the activity of a distributed network of areas, but the differential contribution of each area remains unclear. We recorded single neurons in the human medial frontal cortex and the medial temporal lobe while subjects held up to three items in memory. We found persistently active neurons in both areas. Persistent activity of hippocampal and amygdala neurons was stimulus-specific, formed stable attractors, and was predictive of memory content. Medial frontal cortex persistent activity, on the other hand, was modulated by memory load and task set but was not stimulus-specific. Trial-by-trial variability in persistent activity in both areas was related to memory strength, because it predicted the speed and accuracy by which stimuli were remembered. This work reveals, in humans, direct evidence for a distributed network of persistently active neurons supporting working memory maintenance. PMID:28218914

  9. Feature-Based Visual Short-Term Memory Is Widely Distributed and Hierarchically Organized.

    PubMed

    Dotson, Nicholas M; Hoffman, Steven J; Goodell, Baldwin; Gray, Charles M

    2018-06-15

    Feature-based visual short-term memory is known to engage both sensory and association cortices. However, the extent of the participating circuit and the neural mechanisms underlying memory maintenance are still a matter of vigorous debate. To address these questions, we recorded neuronal activity from 42 cortical areas in monkeys performing a feature-based visual short-term memory task and an interleaved fixation task. We find that task-dependent differences in firing rates are widely distributed throughout the cortex, while stimulus-specific changes in firing rates are more restricted and hierarchically organized. We also show that microsaccades during the memory delay encode the stimuli held in memory and that units modulated by microsaccades are more likely to exhibit stimulus specificity, suggesting that eye movements contribute to visual short-term memory processes. These results support a framework in which most cortical areas, within a modality, contribute to mnemonic representations at timescales that increase along the cortical hierarchy. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. 3-D readout-electronics packaging for high-bandwidth massively paralleled imager

    DOEpatents

    Kwiatkowski, Kris; Lyke, James

    2007-12-18

    Dense, massively parallel signal processing electronics are co-packaged behind associated sensor pixels. Microchips containing a linear or bilinear arrangement of photo-sensors, together with associated complex electronics, are integrated into a simple 3-D structure (a "mirror cube"). An array of photo-sensitive cells is disposed on a stacked CMOS chip's surface at a 45-degree angle from light-reflecting mirror surfaces formed on a neighboring CMOS chip surface. Image processing electronics are held within the stacked CMOS chip layers. Electrical connections couple each of the stacked CMOS chip layers to a distribution grid, with the connections distributing power and signals to components associated with each stacked CMOS chip layer.

  11. X-RAY EMISSION LINE PROFILES FROM WIND CLUMP BOW SHOCKS IN MASSIVE STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ignace, R.; Waldron, W. L.; Cassinelli, J. P.

    2012-05-01

    The consequences of structured flows continue to be a pressing topic in relating spectral data to physical processes occurring in massive star winds. In a preceding paper, our group reported on hydrodynamic simulations of hypersonic flow past a rigid spherical clump to explore the structure of bow shocks that can form around wind clumps. Here we report on profiles of emission lines that arise from such bow shock morphologies. To compute emission line profiles, we adopt a two-component flow structure of wind and clumps using two 'beta' velocity laws. While individual bow shocks tend to generate double-horned emission line profiles, a group of bow shocks can lead to line profiles with a range of shapes with blueshifted peak emission that depends on the degree of X-ray photoabsorption by the interclump wind medium, the number of clump structures in the flow, and the radial distribution of the clumps. Using the two beta law prescription, the theoretical emission measure and temperature distribution throughout the wind can be derived. The emission measure tends to be a power law, and the temperature distribution is broad in terms of wind velocity. Although restricted to the case of adiabatic cooling, our models highlight the influence of bow shock effects for hot plasma temperature and emission measure distributions in stellar winds and their impact on X-ray line profile shapes. Previous models have focused on geometrical considerations of the clumps and their distribution in the wind. Our results represent the first time that the temperature distribution of wind clump structures is explicitly and self-consistently accounted for in modeling X-ray line profile shapes for massive stars.
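
    For reference, the sketch below evaluates the standard "beta" velocity law, v(r) = v_inf * (1 - b R*/r)**beta, for two components as a toy version of the two-beta-law prescription mentioned above. The terminal velocity, beta values, and starting-velocity fraction are illustrative assumptions, not the paper's parameters.

    ```python
    # Beta velocity law for line-driven winds, evaluated for an ambient wind
    # and a clump component with different beta exponents.
    import numpy as np

    def beta_law(r_over_rstar, v_inf, beta, v0_frac=0.01):
        b = 1.0 - v0_frac ** (1.0 / beta)       # ensures v(R*) = v0_frac * v_inf
        return v_inf * (1.0 - b / r_over_rstar) ** beta

    r = np.linspace(1.0, 10.0, 5)               # radius in units of the stellar radius
    print("wind  (beta=1):", np.round(beta_law(r, v_inf=2500.0, beta=1.0), 0), "km/s")
    print("clump (beta=3):", np.round(beta_law(r, v_inf=2500.0, beta=3.0), 0), "km/s")
    ```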

  12. Protecting and rescuing the effectors: roles of differentiation and survival in the control of memory T cell development

    PubMed Central

    Kurtulus, Sema; Tripathi, Pulak; Hildeman, David A.

    2013-01-01

    Vaccines, arguably the single most important intervention in improving human health, have exploited the phenomenon of immunological memory. The elicitation of memory T cells is often an essential part of successful long-lived protective immunity. Our understanding of T cell memory has been greatly aided by the development of TCR Tg mice and MHC tetrameric staining reagents that have allowed the precise tracking of antigen-specific T cell responses. Indeed, following acute infection or immunization, naïve T cells undergo a massive expansion culminating in the generation of a robust effector T cell population. This peak effector response is relatively short-lived and, while most effector T cells die by apoptosis, some remain and develop into memory cells. Although the molecular mechanisms underlying this cell fate decision remain incompletely defined, substantial progress has been made, particularly with regard to CD8+ T cells. For example, the effector CD8+ T cells generated during a response are heterogeneous, consisting of cells with more or less potential to develop into full-fledged memory cells. Development of CD8+ T cell memory is regulated by the transcriptional programs that control the differentiation and survival of effector T cells. While the type of antigenic stimulation and level of inflammation control effector CD8+ T cell differentiation, availability of cytokines and their ability to control expression and function of Bcl-2 family members governs their survival. These distinct differentiation and survival programs may allow for finer therapeutic intervention to control both the quality and quantity of CD8+ T cell memory. The effector-to-memory transition of CD4+ T cells is less well characterized than that of CD8+ T cells; emerging details will be discussed. This review will focus on the recent progress made in our understanding of the mechanisms underlying the development of T cell memory with an emphasis on factors controlling survival of effector T cells. PMID:23346085

  13. Learning to read aloud: A neural network approach using sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Joglekar, Umesh Dwarkanath

    1989-01-01

    An attempt to solve a problem of text-to-phoneme mapping is described which does not appear amenable to solution by use of standard algorithmic procedures. Experiments based on a model of distributed processing are also described. This model (sparse distributed memory (SDM)) can be used in an iterative supervised learning mode to solve the problem. Additional improvements aimed at obtaining better performance are suggested.

  14. Sequential associative memory with nonuniformity of the layer sizes.

    PubMed

    Teramae, Jun-Nosuke; Fukai, Tomoki

    2007-01-01

    Sequence retrieval is of fundamental importance in information processing by the brain, and has been studied extensively in neural network models. Most previous sequential associative memory models embed sequences of memory patterns that have nearly equal sizes. It was recently shown that local cortical networks display many diverse yet repeatable precise temporal sequences of neuronal activities, termed "neuronal avalanches." Interestingly, these avalanches displayed size and lifetime distributions that obey power laws. Inspired by these experimental findings, here we consider an associative memory model of binary neurons that stores sequences of memory patterns with highly variable sizes. Our analysis includes the case where the statistics of these size variations obey the above-mentioned power laws. We study the retrieval dynamics of such memory systems by analytically deriving the equations that govern the time evolution of macroscopic order parameters. We calculate the critical sequence length beyond which the network cannot retrieve memory sequences correctly. As an application of the analysis, we show how the present variability in sequential memory patterns degrades the power-law lifetime distribution of retrieved neural activities.

  15. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    NASA Astrophysics Data System (ADS)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder will be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit depths and better color subsampling patterns such as YUV 4:2:2 or 4:4:4 formats. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264 compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated multiple times, catering to different performance needs. The DFM serves the data required by the different numbers of DFUs and also manages all the neighboring data required for future processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
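
    To illustrate the macroblock-level parallelism mentioned above, the sketch below prints a classic wavefront schedule: because deblocking a macroblock depends on its left and top neighbours, all macroblocks on the same anti-diagonal are mutually independent and could be dispatched to separate cores. This shows only the generic dependency pattern, not the DFU/DFM architecture described in the paper; the frame size is an assumption.

    ```python
    # Wavefront schedule for macroblock-parallel deblocking: macroblock (r, c)
    # depends on (r, c-1) and (r-1, c), so all blocks with r + c == wave are
    # independent and can be filtered in parallel.
    mb_cols, mb_rows = 8, 5          # toy frame size in macroblock units

    for wave in range(mb_rows + mb_cols - 1):
        batch = [(r, c) for r in range(mb_rows) for c in range(mb_cols) if r + c == wave]
        # Every (row, col) pair in `batch` could run on a separate core.
        print(f"wave {wave:2d}: {batch}")
    ```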

  16. Patterns of particle distribution in multiparticle systems by random walks with memory enhancement and decay

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi

    2002-07-01

    We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With a decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension Df,p characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density ρ(2) was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ(2) on α is similar to that of Df,p on α. ρ(2) increases with time as a power law in the process of adjusting the particle distribution, and then ρ(2) tends to a stable equilibrium value.
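
    The following is a loose, single-walker sketch of a random walk with memory enhancement and decay: the walker leaves "information" at visited sites, the information decays with elapsed time as a power law with exponent α, and steps are biased toward neighbouring sites carrying more remanent information. The lattice size, bias rule, and decay form are simplified assumptions, not the paper's exact model.

    ```python
    # Toy random walk with memory enhancement (mark visited sites) and decay
    # (remanent information fades as age**(-alpha)); steps are biased toward
    # neighbouring sites with more remanent information.
    import numpy as np

    rng = np.random.default_rng(11)
    L, steps, alpha = 51, 2000, 1.0
    deposit_time = -np.ones((L, L))              # last visit time; -1 = never visited
    pos = np.array([L // 2, L // 2])
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

    for t in range(1, steps + 1):
        deposit_time[tuple(pos)] = t             # enhance memory at the current site
        neigh = (pos + moves) % L                # periodic boundaries
        last = deposit_time[neigh[:, 0], neigh[:, 1]]
        age = t - last
        info = np.where(last < 0, 0.0, age ** (-alpha))
        prob = (1.0 + info) / (1.0 + info).sum() # bias toward remembered sites
        pos = neigh[rng.choice(4, p=prob)]

    print("distinct sites visited:", np.count_nonzero(deposit_time >= 0))
    ```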

  17. Distributed state-space generation of discrete-state stochastic models

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Gluckman, Joshua; Nicol, David

    1995-01-01

    High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of logical and numerical properties of these models often requires the generation and storage of the entire underlying state space. This imposes practical limitations on the types of systems which can be modeled. Because of the vast amount of memory consumed, we investigate distributed algorithms for the generation of state space graphs. The distributed construction allows us to take advantage of the combined memory readily available on a network of workstations. The key technical problem is to find effective methods for on-the-fly partitioning, so that the state space is evenly distributed among processors. In this paper we report on the implementation of a distributed state-space generator that may be linked to a number of existing system modeling tools. We discuss partitioning strategies in the context of Petri net models, and report on performance observed on a network of workstations, as well as on a distributed memory multi-computer.
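
    The sketch below illustrates the core partitioning idea: a hash function assigns each newly generated state to an owning worker, which expands only the states it owns and forwards the rest. The "workers" are simulated in one process purely to show the logic; the toy transition function, worker count, and queue structure are assumptions, not the paper's implementation.

    ```python
    # Hash-partitioned state-space generation, simulated serially.
    from collections import deque

    N_WORKERS = 4

    def owner(state):
        return hash(state) % N_WORKERS          # on-the-fly partitioning by hashing

    def successors(state):
        # Toy transition function: states are integers in a small ring.
        return [(state + 1) % 1000, (state * 2) % 1000]

    frontiers = [deque() for _ in range(N_WORKERS)]   # per-worker work queues
    known = [set() for _ in range(N_WORKERS)]         # per-worker stored states

    initial = 0
    frontiers[owner(initial)].append(initial)
    while any(frontiers):
        for w in range(N_WORKERS):
            while frontiers[w]:
                s = frontiers[w].popleft()
                if s in known[w]:
                    continue
                known[w].add(s)
                for nxt in successors(s):
                    frontiers[owner(nxt)].append(nxt)   # "send" to the owning worker

    print("states stored per worker:", [len(k) for k in known])
    ```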

  18. UV TO FAR-IR CATALOG OF A GALAXY SAMPLE IN NEARBY CLUSTERS: SPECTRAL ENERGY DISTRIBUTIONS AND ENVIRONMENTAL TRENDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Fernandez, Jonathan D.; Iglesias-Paramo, J.; Vilchez, J. M., E-mail: jonatan@iaa.es

    2012-03-01

    In this paper, we present a sample of cluster galaxies devoted to studying the environmental influence on star formation activity. The galaxies in this sample inhabit clusters showing a rich variety of characteristics and have been observed by SDSS-DR6 down to M_B ≈ -18, and by the Galaxy Evolution Explorer AIS throughout sky regions corresponding to several megaparsecs. We assign broadband and emission-line fluxes from the ultraviolet to the far-infrared to each galaxy, producing an accurate spectral energy distribution for spectral fitting analysis. The clusters follow the general X-ray luminosity versus velocity dispersion trend of L_X ∝ σ_c^4.4. The analysis of the distributions of galaxy density, counted up to the 5th nearest neighbor (Σ_5), shows: (1) the virial regions and the cluster outskirts share a common range in the high-density part of the distribution, which can be attributed to the presence of massive galaxy structures in the surroundings of virial regions; (2) the virial regions of massive clusters (σ_c > 550 km s^-1) present a Σ_5 distribution statistically distinguishable (≈96%) from the corresponding distribution of low-mass clusters (σ_c < 550 km s^-1). Both massive and low-mass clusters follow a similar density-radius trend, but the low-mass clusters avoid the high-density extreme. We illustrate, with ABELL 1185, the environmental trends of galaxy populations. Maps of sky-projected galaxy density show how low-luminosity star-forming galaxies are distributed along more extended structures than their giant counterparts, whereas low-luminosity passive galaxies avoid the low-density environment. Giant passive and star-forming galaxies share rather similar sky regions, with passive galaxies exhibiting more concentrated distributions.
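
    As a pointer for readers, the sketch below computes a fifth-nearest-neighbour surface density, Σ_5 = 5 / (π d5²), with a k-d tree. Conventions vary (e.g. N versus N+1 in the numerator, projected versus redshift-sliced distances), so this is one common choice and not necessarily the paper's exact definition; the positions are synthetic.

    ```python
    # Sigma_5 local density estimator: 5 / (pi * d5**2), with d5 the projected
    # distance to the fifth nearest neighbour.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(2)
    xy = rng.uniform(0, 10, size=(1000, 2))      # projected galaxy positions (e.g. Mpc)

    tree = cKDTree(xy)
    # k=6 because the nearest "neighbour" returned (distance 0) is the point itself.
    d, _ = tree.query(xy, k=6)
    sigma5 = 5.0 / (np.pi * d[:, 5] ** 2)        # galaxies per unit area

    print("median Sigma_5:", np.round(np.median(sigma5), 2))
    ```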

  19. A parallel solver for huge dense linear systems

    NASA Astrophysics Data System (ADS)

    Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.

    2011-11-01

    HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) to facilitate the parallel solution of very large dense systems to scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies to leverage the secondary memory in order to solve huge linear systems of order O(100 000). The API is based on the parallel linear algebra library PLAPACK, and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of the secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
    New version program summary:
    Program title: Huge Dense System Solver (HDSS)
    Catalogue identifier: AEHU_v1_1
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 87 062
    No. of bytes in distributed program, including test data, etc.: 1 069 110
    Distribution format: tar.gz
    Programming language: Fortran90, C
    Computer: Parallel architectures: multiprocessors, computer clusters
    Operating system: Linux/Unix
    Has the code been vectorized or parallelized?: Yes, includes MPI primitives
    RAM: Tested for up to 190 GB
    Classification: 6.5
    External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution)
    Catalogue identifier of previous version: AEHU_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
    Does the new version supersede the previous version?: Yes
    Nature of problem: Huge scale dense systems of linear equations, Ax=B, beyond standard LAPACK capabilities.
    Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient.
    Reasons for new version: In many applications we need to guarantee a high accuracy in the solution of very large linear systems and we can do it by using double-precision arithmetic.
    Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine. The user can choose the kind of arithmetic and the values of several parameters of the environment.
    Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.

  20. Vascular system modeling in parallel environment - distributed and shared memory approaches

    PubMed Central

    Jurczuk, Krzysztof; Kretowski, Marek; Bezy-Wendling, Johanne

    2011-01-01

    The paper presents two approaches in parallel modeling of vascular system development in internal organs. In the first approach, new parts of tissue are distributed among processors and each processor is responsible for perfusing its assigned parts of tissue to all vascular trees. Communication between processors is accomplished by passing messages and therefore this algorithm is perfectly suited for distributed memory architectures. The second approach is designed for shared memory machines. It parallelizes the perfusion process during which individual processing units perform calculations concerning different vascular trees. The experimental results, performed on a computing cluster and multi-core machines, show that both algorithms provide a significant speedup. PMID:21550891

  1. Biomimetic Models for An Ecological Approach to Massively-Deployed Sensor Networks

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng

    2005-01-01

    Promises of ubiquitous control of the physical environment by massively-deployed wireless sensor networks open avenues for new applications that will redefine the way we live and work. Due to the small size and low cost of sensor devices, visionaries promise systems enabled by the deployment of massive numbers of sensors, ubiquitous throughout our environment, working in concert. Recent research has concentrated on developing techniques for performing relatively simple tasks with minimal energy expense, assuming some form of centralized control. Unfortunately, centralized control is not conducive to parallel activities and does not scale to massive size networks. Execution of simple tasks in sparse networks will not lead to the sophisticated applications predicted. We propose a new way of looking at massively-deployed sensor networks, motivated by lessons learned from the way biological ecosystems are organized. We demonstrate that in such a model, fully distributed data aggregation can be performed in a scalable fashion in massively deployed sensor networks, where motes operate on local information, making local decisions that are aggregated across the network to achieve globally-meaningful effects. We show that such architectures may be used to facilitate communication and synchronization in a fault-tolerant manner, while balancing workload and required energy expenditure throughout the network.
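
    As a generic illustration of fully distributed aggregation with only local decisions, the sketch below runs pairwise gossip averaging on a ring of motes: every mote repeatedly averages its value with a random neighbour and all motes converge to the global mean. The topology, reading model, and iteration count are assumptions; this is not the ecological protocol proposed in the paper.

    ```python
    # Pairwise gossip averaging: local exchanges only, global mean emerges.
    import numpy as np

    rng = np.random.default_rng(9)
    n_motes = 100
    readings = rng.normal(loc=20.0, scale=5.0, size=n_motes)   # e.g. temperatures
    values = readings.copy()

    # Ring topology: each mote can talk only to its two immediate neighbours.
    for _ in range(5000):
        i = rng.integers(n_motes)
        j = (i + rng.choice([-1, 1])) % n_motes
        values[i] = values[j] = 0.5 * (values[i] + values[j])  # local averaging step

    print("true mean of readings:", round(readings.mean(), 3))
    print("gossip estimates     :", round(values.mean(), 3), "+/-", round(values.std(), 4))
    ```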

  2. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets spread amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large fraction of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it also increases the cost of MPI_Allgather and hence the communication time between processors. This necessitates an improved communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism achieves efficient communication between the processors in precise steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models.
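
    The sketch below simulates the recursive-doubling exchange pattern referenced above for a power-of-two number of ranks: in step s, rank r swaps everything it has gathered so far with partner r XOR 2**s, so every rank holds all blocks after log2(P) steps. It shows only the communication pattern in one process; the actual work uses MPI one-sided (RMA) operations inside NEURON, and the rank count and block contents here are assumptions.

    ```python
    # Recursive-doubling allgather, simulated serially for P = 8 ranks.
    P = 8
    gathered = [{r: f"spikes_from_rank_{r}"} for r in range(P)]

    step = 1
    while step < P:
        new = [dict(g) for g in gathered]
        for r in range(P):
            partner = r ^ step                   # XOR gives the pairwise partner
            new[r].update(gathered[partner])     # exchange accumulated blocks
        gathered = new
        step *= 2

    assert all(len(g) == P for g in gathered)
    print(f"after log2({P}) = {P.bit_length() - 1} steps, every rank holds all {P} blocks")
    ```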

  3. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    PubMed Central

    Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets spread amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large fraction of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it also increases the cost of MPI_Allgather and hence the communication time between processors. This necessitates an improved communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism achieves efficient communication between the processors in precise steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models. PMID:27413363

  4. Storage of multiple single-photon pulses emitted from a quantum dot in a solid-state quantum memory.

    PubMed

    Tang, Jian-Shun; Zhou, Zong-Quan; Wang, Yi-Tao; Li, Yu-Long; Liu, Xiao; Hua, Yi-Lin; Zou, Yang; Wang, Shuang; He, De-Yong; Chen, Geng; Sun, Yong-Nan; Yu, Ying; Li, Mi-Feng; Zha, Guo-Wei; Ni, Hai-Qiao; Niu, Zhi-Chuan; Li, Chuan-Feng; Guo, Guang-Can

    2015-10-15

    Quantum repeaters are critical components for distributing entanglement over long distances in the presence of unavoidable optical losses during transmission. Stimulated by the Duan-Lukin-Cirac-Zoller protocol, many improved quantum repeater protocols based on quantum memories have been proposed, which commonly focus on the entanglement-distribution rate. Among these protocols, the elimination of multiple photons (or multiple photon-pairs) and the use of multimode quantum memory are demonstrated to have the ability to greatly improve the entanglement-distribution rate. Here, we demonstrate the storage of deterministic single photons emitted from a quantum dot in a polarization-maintaining solid-state quantum memory; in addition, multi-temporal-mode memory with 1, 20 and 100 narrow single-photon pulses is also demonstrated. Multi-photons are eliminated, and only one photon at most is contained in each pulse. Moreover, the solid-state properties of both sub-systems make this configuration more stable and easier to be scalable. Our work will be helpful in the construction of efficient quantum repeaters based on all-solid-state devices.

  5. Storage of multiple single-photon pulses emitted from a quantum dot in a solid-state quantum memory

    PubMed Central

    Tang, Jian-Shun; Zhou, Zong-Quan; Wang, Yi-Tao; Li, Yu-Long; Liu, Xiao; Hua, Yi-Lin; Zou, Yang; Wang, Shuang; He, De-Yong; Chen, Geng; Sun, Yong-Nan; Yu, Ying; Li, Mi-Feng; Zha, Guo-Wei; Ni, Hai-Qiao; Niu, Zhi-Chuan; Li, Chuan-Feng; Guo, Guang-Can

    2015-01-01

    Quantum repeaters are critical components for distributing entanglement over long distances in the presence of unavoidable optical losses during transmission. Stimulated by the Duan–Lukin–Cirac–Zoller protocol, many improved quantum repeater protocols based on quantum memories have been proposed, which commonly focus on the entanglement-distribution rate. Among these protocols, the elimination of multiple photons (or multiple photon-pairs) and the use of multimode quantum memory are demonstrated to have the ability to greatly improve the entanglement-distribution rate. Here, we demonstrate the storage of deterministic single photons emitted from a quantum dot in a polarization-maintaining solid-state quantum memory; in addition, multi-temporal-mode memory with 1, 20 and 100 narrow single-photon pulses is also demonstrated. Multi-photons are eliminated, and only one photon at most is contained in each pulse. Moreover, the solid-state properties of both sub-systems make this configuration more stable and easier to be scalable. Our work will be helpful in the construction of efficient quantum repeaters based on all-solid-state devices. PMID:26468996

  6. Distributed Learning Enhances Relational Memory Consolidation

    ERIC Educational Resources Information Center

    Litman, Leib; Davachi, Lila

    2008-01-01

    It has long been known that distributed learning (DL) provides a mnemonic advantage over massed learning (ML). However, the underlying mechanisms that drive this robust mnemonic effect remain largely unknown. In two experiments, we show that DL across a 24 hr interval does not enhance immediate memory performance but instead slows the rate of…

  7. Distributed memory compiler design for sparse problems

    NASA Technical Reports Server (NTRS)

    Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema

    1991-01-01

    A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.

  8. Enhancement of Immune Memory Responses to Respiratory Infection

    DTIC Science & Technology

    2017-08-01

    Award number: W81XWH-16-1-0360. Title: Enhancement of Immune Memory Responses to Respiratory Infection. Principal investigator: Dr. Min Chen, PhD. Distribution statement: Unlimited Distribution. Abstract (excerpt): Maintenance of long-term immunological memory against pathogens is crucial for the rapid... highly expressed in memory B cells in mice, and Atg7 is required for maintenance of long-term memory B cells needed to protect against influenza...

  9. AN EVOLVING STELLAR INITIAL MASS FUNCTION AND THE GAMMA-RAY BURST REDSHIFT DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, F. Y.; Dai, Z. G.

    2011-02-01

    Recent studies suggest that Swift gamma-ray bursts (GRBs) may not trace an ordinary star formation history (SFH). Here, we show that the GRB rate turns out to be consistent with the SFH with an evolving stellar initial mass function (IMF). We first show that the latest Swift sample of GRBs reveals an increasing evolution in the GRB rate relative to the ordinary star formation rate at high redshifts. We then assume that only massive stars with masses greater than the critical value produce GRBs and use an evolving stellar IMF suggested by Dave to fit the latest GRB redshift distribution. This evolving IMF would increase the relative number of massive stars, which could lead to more GRB explosions at high redshifts. We find that the evolving IMF can well reproduce the observed redshift distribution of Swift GRBs.

  10. Structure of massive star forming clumps from the Red MSX Source Survey

    NASA Astrophysics Data System (ADS)

    Figura, Charles C.; Urquhart, J. S.; Morgan, L.

    2014-01-01

    We present ammonia (1,1) and (2,2) emission maps of 61 high-mass star forming regions drawn from the Red MSX Source (RMS) Survey and observed with the Green Bank Telescope's K-Band Focal Plane Array. We use these observations to investigate the spatial distribution of the environmental conditions associated with this sample of embedded massive young stellar objects (MYSOs). Ammonia is an excellent high-density tracer of star-forming regions as its hyperfine structure allows relatively simple characterisation of the molecular environment. These maps are used to measure the column density, kinetic gas temperature distributions and velocity structure across these regions. We compare the distribution of these properties to that of the associated dust and mid-infrared emission traced by the ATLASGAL 870 micron emission maps and the Spitzer GLIMPSE IRAC images. We present a summary of these results and highlight some of the more interesting findings.

  11. Correlative transmission electron microscopy and electrical properties study of switchable phase-change random access memory line cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oosthoek, J. L. M.; Kooi, B. J., E-mail: B.J.Kooi@rug.nl; Voogt, F. C.

    2015-02-14

    Phase-change memory line cells, where the active material has a thickness of 15 nm, were prepared for transmission electron microscopy (TEM) observation such that they still could be switched and characterized electrically after the preparation. The result of these observations in comparison with detailed electrical characterization showed (i) normal behavior for relatively long amorphous marks, resulting in a hyperbolic dependence between SET resistance and SET current, indicating a switching mechanism based on initially long and thin nanoscale crystalline filaments which thicken gradually, and (ii) anomalous behavior, which holds for relatively short amorphous marks, where initially directly a massive crystalline filament is formed that consumes most of the width of the amorphous mark only leaving minor residual amorphous regions at its edges. The present results demonstrate that even in (purposely) thick TEM samples, the TEM sample preparation hampers the probability to observe normal behavior and it can be debated whether it is possible to produce electrically switchable TEM specimen in which the memory cells behave the same as in their original bulk embedded state.

  12. Correlative transmission electron microscopy and electrical properties study of switchable phase-change random access memory line cells

    NASA Astrophysics Data System (ADS)

    Oosthoek, J. L. M.; Voogt, F. C.; Attenborough, K.; Verheijen, M. A.; Hurkx, G. A. M.; Gravesteijn, D. J.; Kooi, B. J.

    2015-02-01

    Phase-change memory line cells, where the active material has a thickness of 15 nm, were prepared for transmission electron microscopy (TEM) observation such that they still could be switched and characterized electrically after the preparation. The result of these observations in comparison with detailed electrical characterization showed (i) normal behavior for relatively long amorphous marks, resulting in a hyperbolic dependence between SET resistance and SET current, indicating a switching mechanism based on initially long and thin nanoscale crystalline filaments which thicken gradually, and (ii) anomalous behavior, which holds for relatively short amorphous marks, where initially directly a massive crystalline filament is formed that consumes most of the width of the amorphous mark only leaving minor residual amorphous regions at its edges. The present results demonstrate that even in (purposely) thick TEM samples, the TEM sample preparation hampers the probability to observe normal behavior and it can be debated whether it is possible to produce electrically switchable TEM specimen in which the memory cells behave the same as in their original bulk embedded state.

  13. Central Engine Memory of Gamma-Ray Bursts and Soft Gamma-Ray Repeaters

    NASA Astrophysics Data System (ADS)

    Zhang, Bin-Bin; Zhang, Bing; Castro-Tirado, Alberto J.

    2016-04-01

    Gamma-ray bursts (GRBs) are bursts of γ-rays generated from relativistic jets launched from catastrophic events such as massive star core collapse or binary compact star coalescence. Previous studies suggested that GRB emission is erratic, with no noticeable memory in the central engine. Here we report a discovery that similar light curve patterns exist within individual bursts for at least some GRBs. Applying the Dynamic Time Warping method, we show that similarity of light curve patterns between pulses of a single burst or between the light curves of a GRB and its X-ray flare can be identified. This suggests that the central engine of at least some GRBs carries “memory” of its activities. We also show that the same technique can identify memory-like emission episodes in the flaring emission of soft gamma-ray repeaters (SGRs), which are believed to be Galactic, highly magnetized neutron stars named magnetars. Such a phenomenon challenges the standard black hole central engine models for GRBs and suggests a common physical mechanism behind GRBs and SGRs, which points toward a magnetar central engine of GRBs.

  14. Autobiographical Memory and Depression in the Later Age: The Bump Is a Turning Point

    ERIC Educational Resources Information Center

    Gidron, Yori; Alon, Shirly

    2007-01-01

    This preliminary study integrated previous findings of the distribution of autobiographical memories in the later age according to their age of occurrence, with the overgeneral memory bias predictive of depression. Twenty-five non-demented, Israeli participants between 65-89 years of age provided autobiographical memories to 4 groups of word cues…

  15. Cortex and Memory: Emergence of a New Paradigm

    ERIC Educational Resources Information Center

    Fuster, Joaquin M.

    2009-01-01

    Converging evidence from humans and nonhuman primates is obliging us to abandon conventional models in favor of a radically different, distributed-network paradigm of cortical memory. Central to the new paradigm is the concept of memory network or cognit--that is, a memory or an item of knowledge defined by a pattern of connections between neuron…

  16. The Effects of Single and Close Binary Evolution on the Stellar Mass Function

    NASA Astrophysics Data System (ADS)

    Schneider, R. N. F.; Izzard, G. R.; de Mink, S.; Langer, N.; Stolte, A.; de Koter, A.; Gvaramadze, V. V.; Hussmann, B.; Liermann, A.; Sana, H.

    2013-06-01

    Massive stars are almost exclusively born in star clusters, where stars in a cluster are expected to be born quasi-simultaneously and with the same chemical composition. The distribution of their birth masses favors lower over higher stellar masses, such that the most massive stars are rare, and the existence of a stellar upper mass limit is still debated. The majority of massive stars are born as members of close binary systems and most of them will exchange mass with a close companion during their lifetime. We explore the influence of single and binary star evolution on the high mass end of the stellar mass function using a rapid binary evolution code. We apply our results to two massive Galactic star clusters and show how the shape of their mass functions can be used to determine cluster ages and comment on the stellar upper mass limit in view of our new findings.

  17. Angular distributions and mechanisms of fragmentation by relativistic heavy ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoenner, R.W.; Haustein, P.E.; Cumming, J.B.

    1984-07-23

    Angular distributions of massive fragments from relativistic heavy-ion interactions are reported. Sideward peaking is observed for the light fragment ^37Ar from 25-GeV ^12C + Au, while the distribution for ^127Xe is strongly forward peaked. Conflicts of these observations and other existing data with predictions of models for the fragmentation process are discussed.

  18. Entropy-based heavy tailed distribution transformation and visual analytics for monitoring massive network traffic

    NASA Astrophysics Data System (ADS)

    Han, Keesook J.; Hodge, Matthew; Ross, Virginia W.

    2011-06-01

    For monitoring network traffic, there is an enormous cost in collecting, storing, and analyzing network traffic datasets. Data mining based network traffic analysis has a growing interest in the cyber security community, but is computationally expensive for finding correlations between attributes in massive network traffic datasets. To lower the cost and reduce computational complexity, it is desirable to perform feasible statistical processing on effective reduced datasets instead of on the original full datasets. Because of the dynamic behavior of network traffic, traffic traces exhibit mixtures of heavy tailed statistical distributions or overdispersion. Heavy tailed network traffic characterization and visualization are important and essential tasks to measure network performance for the Quality of Services. However, heavy tailed distributions are limited in their ability to characterize real-time network traffic due to the difficulty of parameter estimation. The Entropy-Based Heavy Tailed Distribution Transformation (EHTDT) was developed to convert the heavy tailed distribution into a transformed distribution to find the linear approximation. The EHTDT linearization has the advantage of being amenable to characterize and aggregate overdispersion of network traffic in realtime. Results of applying the EHTDT for innovative visual analytics to real network traffic data are presented.
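
    The abstract does not spell out the EHTDT transform itself; as a generic illustration of what linearizing a heavy tail means, the sketch below computes the empirical complementary CDF of Pareto-distributed "flow sizes", which becomes a straight line on log-log axes with slope equal to minus the tail index (the data and parameters are made up, not taken from the paper):

      # Empirical CCDF of heavy-tailed samples: a Pareto tail is linear on log-log axes.
      import numpy as np

      rng = np.random.default_rng(0)
      alpha, n = 1.5, 100_000
      flow_sizes = (1.0 - rng.random(n)) ** (-1.0 / alpha)    # Pareto(alpha) with x_min = 1

      x = np.sort(flow_sizes)
      ccdf = 1.0 - np.arange(1, n + 1) / n                    # P(X > x)

      # Fit a line to log(CCDF) vs log(x) over the tail; the slope estimates -alpha.
      tail = slice(n // 2, n - 1)                             # drop the last point, where CCDF = 0
      slope, _ = np.polyfit(np.log(x[tail]), np.log(ccdf[tail]), 1)
      print("estimated tail index:", round(-slope, 2))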

  19. The descendants of the first quasars in the BlueTides simulation

    NASA Astrophysics Data System (ADS)

    Tenneti, Ananth; Di Matteo, Tiziana; Croft, Rupert; Garcia, ThomasJae; Feng, Yu

    2018-02-01

    Supermassive blackholes with masses of a billion solar masses or more are known to exist up to z = 7. However, the present-day environments of the descendants of the first quasars are not well understood and it is not known if they live in massive galaxy clusters or more isolated galaxies at z = 0. We use a dark matter-only realization (BTMassTracer) of the BlueTides cosmological hydrodynamic simulation to study the halo properties of the descendants of the most massive black holes at z = 8. We find that the descendants of the quasars with the most massive black holes are not amongst the most massive haloes. They reside in haloes with group-like (~10^14 M⊙) masses, while the most massive haloes in the simulations are rich clusters with masses ~10^15 M⊙. At z = 0, the distribution of halo masses of these quasar descendants is similar to that of the descendants of the least massive black holes, which indicates that they are likely to exist in similar environments. By tracing back to the z = 8 progenitors of the most massive (cluster-sized) haloes at z = 0, we find that their most likely black hole mass is less than 10^7 M⊙; they are clearly not amongst the most massive black holes. For haloes above 10^15 M⊙, there is only a 20 per cent probability that their z = 8 progenitors hosted a black hole with mass above 10^7 M⊙.

  20. Extremely Low Operating Current Resistive Memory Based on Exfoliated 2D Perovskite Single Crystals for Neuromorphic Computing.

    PubMed

    Tian, He; Zhao, Lianfeng; Wang, Xuefeng; Yeh, Yao-Wen; Yao, Nan; Rand, Barry P; Ren, Tian-Ling

    2017-12-26

    Extremely low energy consumption neuromorphic computing is required to achieve massively parallel information processing on par with the human brain. To achieve this goal, resistive memories based on materials with ionic transport and extremely low operating current are required. Extremely low operating current allows for low power operation by minimizing the program, erase, and read currents. However, materials currently used in resistive memories, such as defective HfOx, AlOx, TaOx, etc., cannot suppress electronic transport (i.e., leakage current) while allowing good ionic transport. Here, we show that 2D Ruddlesden-Popper phase hybrid lead bromide perovskite single crystals are promising materials for low operating current nanodevice applications because of their mixed electronic and ionic transport and ease of fabrication. Ionic transport in the exfoliated 2D perovskite layer is evident via the migration of bromide ions. Filaments with a diameter of approximately 20 nm are visualized, and resistive memories with extremely low program current down to 10 pA are achieved, a value at least 1 order of magnitude lower than conventional materials. The ionic migration and diffusion as an artificial synapse is realized in the 2D layered perovskites at the pA level, which can enable extremely low energy neuromorphic computing.

  1. Hydrogen-peroxide-modified egg albumen for transparent and flexible resistive switching memory

    NASA Astrophysics Data System (ADS)

    Zhou, Guangdong; Yao, Yanqing; Lu, Zhisong; Yang, Xiude; Han, Juanjuan; Wang, Gang; Rao, Xi; Li, Ping; Liu, Qian; Song, Qunliang

    2017-10-01

    Egg albumen is modified by hydrogen peroxide with concentrations of 5%, 10%, 15% and 30% at room temperature. Compared with devices without modification, a memory cell of Ag/10% H2O2-egg albumen/indium tin oxide exhibits obviously enhanced resistive switching memory behavior, with a resistance ratio of 10^4, self-healing switching endurance for 900 cycles and a prolonged retention time of 10^4 s @ 200 mV reading voltage after being bent 10^3 times. The breakage of massive protein chains occurs, followed by the recombination of new protein chain networks, due to the oxidation of amidogen and the synthesis of disulfide during the hydrogen peroxide modification of the egg albumen. Ions such as Fe3+, Na+, K+, which are surrounded by protein chains, are exposed to the outside of the protein chains to generate a series of traps during the egg albumen degeneration process. According to the fitting results of the double-logarithm I-V curves and the current-sensing atomic force microscopy (CS-AFM) images of the ON and OFF states, the charge transfer from one trap center to its neighboring trap center is responsible for the resistive switching memory phenomena. The results of our work indicate that hydrogen-peroxide-modified egg albumen could open up a new avenue for biomaterial application in nanoelectronic systems.

  2. Memories of Physical Education

    ERIC Educational Resources Information Center

    Sidwell, Amy M.; Walls, Richard T.

    2014-01-01

    The purpose of this investigation was to explore college students' autobiographical memories of physical education (PE). Questionnaires were distributed to students enrolled in undergraduate Introduction to PE and Introduction to Communications courses. The 261 participants wrote about memories of PE. These students recalled events from Grades…

  3. Immigration, Language Proficiency, and Autobiographical Memories: Lifespan Distribution and Second-Language Access

    PubMed Central

    Esposito, Alena G.; Baker-Ward, Lynne

    2015-01-01

    This investigation examined two controversies in the autobiographical literature: how cross-language immigration affects the distribution of autobiographical memories across the lifespan and under what circumstances language-dependent recall is observed. Both Spanish/English bilingual immigrants and English monolingual non-immigrants participated in a cue word study, with the bilingual sample taking part in a within-subject language manipulation. The expected bump in the number of memories from early life was observed for non-immigrants but not immigrants, who reported more memories for events surrounding immigration. Aspects of the methodology addressed possible reasons for past discrepant findings. Language-dependent recall was influenced by second-language proficiency. Results were interpreted as evidence that bilinguals with high second-language proficiency, in contrast to those with lower second-language proficiency, access a single conceptual store through either language. The final multi-level model predicting language-dependent recall, including second-language proficiency, age of immigration, internal language, and cue word language, explained ¾ of the between-person variance and ⅕ of the within-person variance. We arrive at two conclusions. First, major life transitions influence the distribution of memories. Second, concept representation across multiple languages follows a developmental model. In addition, the results underscore the importance of considering language experience in research involving memory reports. PMID:26274061

  4. Immigration, language proficiency, and autobiographical memories: Lifespan distribution and second-language access.

    PubMed

    Esposito, Alena G; Baker-Ward, Lynne

    2016-08-01

    This investigation examined two controversies in the autobiographical literature: how cross-language immigration affects the distribution of autobiographical memories across the lifespan and under what circumstances language-dependent recall is observed. Both Spanish/English bilingual immigrants and English monolingual non-immigrants participated in a cue word study, with the bilingual sample taking part in a within-subject language manipulation. The expected bump in the number of memories from early life was observed for non-immigrants but not immigrants, who reported more memories for events surrounding immigration. Aspects of the methodology addressed possible reasons for past discrepant findings. Language-dependent recall was influenced by second-language proficiency. Results were interpreted as evidence that bilinguals with high second-language proficiency, in contrast to those with lower second-language proficiency, access a single conceptual store through either language. The final multi-level model predicting language-dependent recall, including second-language proficiency, age of immigration, internal language, and cue word language, explained ¾ of the between-person variance and ⅕ of the within-person variance. We arrive at two conclusions. First, major life transitions influence the distribution of memories. Second, concept representation across multiple languages follows a developmental model. In addition, the results underscore the importance of considering language experience in research involving memory reports.

  5. Bioinspired architecture approach for a one-billion transistor smart CMOS camera chip

    NASA Astrophysics Data System (ADS)

    Fey, Dietmar; Komann, Marcus

    2007-05-01

    In the paper we present a massively parallel VLSI architecture for future smart CMOS camera chips with up to one billion transistors. Traditional parallel architectures oriented toward central structures, based on MIMD or SIMD approaches, will fail to exploit efficiently the potential offered by future micro- or nanoelectronic devices: they require too many long global interconnects for the distribution of code and for access to common memory. Nature, on the other hand, has developed self-organising and emergent principles to manage complex structures successfully with large numbers of interacting simple elements. We therefore developed a new emergent computing paradigm, denoted Marching Pixels, based on a mixture of bio-inspired computing models such as cellular automata and artificial ants. In the paper we present different Marching Pixels algorithms and the corresponding VLSI array architecture. A detailed synthesis result for a 0.18 μm CMOS process shows that a 256×256 pixel image is processed in less than 10 ms, assuming a moderate 100 MHz clock rate for the processor array. Future higher integration densities and 3D chip-stacking technology will allow megapixel images to be processed within the same time, since our architecture is fully scalable.
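
    A generic illustration of the purely local communication such a processor array relies on: every cell updates from its four nearest neighbours only, so no long global interconnects are needed. The update rule below is a plain smoothing step, not the actual Marching Pixels rule:

      # Local-neighbourhood cellular update: each cell reads only its four neighbours.
      import numpy as np

      grid = np.zeros((256, 256))
      grid[100:110, 120:130] = 1.0                  # a bright object in the "image"

      def local_step(g):
          # Average with the 4-neighbourhood; only nearest-neighbour wiring is required.
          return 0.2 * (g
                        + np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0)
                        + np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1))

      for _ in range(10):
          grid = local_step(grid)
      print("total intensity preserved:", round(float(grid.sum()), 3))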

  6. Development of a Computing Cluster At the University of Richmond

    NASA Astrophysics Data System (ADS)

    Carbonneau, J.; Gilfoyle, G. P.; Bunn, E. F.

    2010-11-01

    The University of Richmond has developed a computing cluster to support the massive simulation and data analysis requirements for programs in intermediate-energy nuclear physics and cosmology. It is a 20-node, 240-core system running Red Hat Enterprise Linux 5. We have built and installed the physics software packages (Geant4, gemc, MADmap...) and developed shell and Perl scripts for running those programs on the remote nodes. The system has a theoretical processing peak of about 2500 GFLOPS. Testing with the High Performance Linpack (HPL) benchmarking program (one of the standard benchmarks used by the TOP500 list of fastest supercomputers) resulted in speeds of over 900 GFLOPS. The difference between the maximum and measured speeds is due to limitations in the communication speed among the nodes, which creates a bottleneck for large-memory problems: as HPL sends data between nodes, the gigabit Ethernet connection cannot keep up with the processing power. We will show how both the theoretical and actual performance of the cluster compares with other current and past clusters, as well as the cost per GFLOP. We will also examine the scaling of the performance when distributed to increasing numbers of nodes.
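
    As a rough check on the numbers quoted above, the theoretical peak is usually estimated as cores × clock rate × floating-point operations per cycle, and the HPL result then gives the efficiency. The clock rate and FLOPs-per-cycle values below are assumptions chosen to reproduce the quoted ~2500 GFLOPS, not figures from the abstract:

      # Rough peak-performance and HPL-efficiency arithmetic (clock and FLOPs/cycle assumed).
      cores = 240                  # 20 nodes x 12 cores
      clock_ghz = 2.6              # assumed core clock
      flops_per_cycle = 4          # assumed double-precision FLOPs per core per cycle

      peak_gflops = cores * clock_ghz * flops_per_cycle       # ~2496, matching "about 2500"
      measured_gflops = 900.0                                 # HPL result quoted above

      print(f"theoretical peak ~ {peak_gflops:.0f} GFLOPS")
      print(f"HPL efficiency   ~ {measured_gflops / peak_gflops:.0%}")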

  7. Real-Time Data Streaming and Storing Structure for the LHD's Fusion Plasma Experiments

    NASA Astrophysics Data System (ADS)

    Nakanishi, Hideya; Ohsuna, Masaki; Kojima, Mamoru; Imazu, Setsuo; Nonomura, Miki; Emoto, Masahiko; Yoshida, Masanobu; Iwata, Chie; Ida, Katsumi

    2016-02-01

    The LHD data acquisition and archiving system, i.e., the LABCOM system, has been fully equipped with high-speed real-time acquisition, streaming, and storage capabilities. To deal with more than 100 MB/s of continuously generated data at each data acquisition (DAQ) node, the DAQ tasks have been implemented as multitasking, multithreaded processes in which shared memory plays the central role in fast, high-volume inter-process data handling. By introducing a 10-second time chunk named a “subshot,” endless data streams can be stored as a consecutive series of fixed-length data blocks, so that completed blocks become readable by other processes even while the write process is continuing. Real-time device and environmental monitoring are implemented in the same way, with further sparse resampling. The central data storage has been separated into two layers so that it can receive multiple 100 MB/s inflows in parallel. For the frontend layer, high-speed SSD arrays are used with the GlusterFS distributed filesystem, which can provide a maximum throughput of 2 GB/s. These design optimizations should be informative for implementing next-generation data archiving systems in big physics, such as ITER.
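
    A minimal sketch of the "subshot" idea: the endless stream is cut into fixed-length chunk files, so each completed chunk becomes readable by other processes while acquisition continues. The file names, chunk size, and data source below are illustrative, not the LABCOM format:

      # Store an endless stream as a series of fixed-length "subshot" chunk files.
      import numpy as np

      CHUNK_SAMPLES = 1_000_000                # stands in for one 10-second subshot

      def acquire_block(n):                    # placeholder for the real digitizer read
          return np.random.default_rng().standard_normal(n).astype('f4')

      def stream_to_subshots(shot_id, n_chunks):
          for seq in range(n_chunks):
              data = acquire_block(CHUNK_SAMPLES)
              fname = f"shot{shot_id:06d}_subshot{seq:04d}.npy"
              np.save(fname, data)             # a closed chunk is readable by other processes
              print("closed", fname)

      stream_to_subshots(shot_id=123456, n_chunks=3)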

  8. Novel Highly Parallel and Systolic Architectures Using Quantum Dot-Based Hardware

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Toomarian, Benny N.; Spotnitz, Matthew

    1997-01-01

    VLSI technology has made possible the integration of a massive number of components (processors, memory, etc.) into a single chip. In VLSI design, memory and processing power are relatively cheap, and the main emphasis of the design is on reducing the overall interconnection complexity, since data routing costs dominate the power, time, and area required to implement a computation. Communication is costly because wires occupy the most space on a circuit and can also degrade clock time. In fact, much of the complexity (and hence the cost) of VLSI design results from minimization of data routing. The main difficulty in VLSI routing is due to the fact that crossing of the lines carrying data, instructions, and control signals is not possible in a plane. Thus, in order to meet this constraint, VLSI design aims at keeping the architecture highly regular, with local and short interconnections. As a result, while the high level of integration has opened the way for massively parallel computation, practical and full exploitation of such a capability in many applications of interest has been hindered by the constraints on the interconnection pattern. More precisely, the use of only localized communication significantly simplifies the design of the interconnection architecture, but at the expense of a somewhat restricted class of applications. For example, there are currently commercially available products integrating hundreds of simple processor elements within a single chip. However, the lack of an adequate interconnection pattern among these processing elements makes them inefficient for exploiting a large degree of parallelism in many applications.

  9. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real-world systems. Applying neural network simulations to real-world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine-grain SIMD computers such as the CM-2 Connection Machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000-processor CM-2 Connection Machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 Connection Machine. Our mapping has virtually no communications overhead, with the exception of the communication required for a global summation across the processors (which has sub-linear runtime growth of order O(log(number of processors))). We can efficiently model very large neural networks with many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
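
    The only collective operation in the mapping described above is a global summation; the same pattern appears in any data-parallel training step, where per-processor gradients are combined with an allreduce whose cost grows as O(log(number of processors)). A minimal mpi4py sketch of that step, using a toy linear model rather than the paper's backpropagation code:

      # Data-parallel training step: the global summation is the only collective needed.
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rng = np.random.default_rng(comm.Get_rank())

      w = np.zeros(8)                                    # weights of a tiny linear model
      x = rng.standard_normal((100, 8))                  # this rank's share of the data
      y = x @ np.arange(8.0) + 0.1 * rng.standard_normal(100)

      for step in range(50):
          grad_local = 2.0 * x.T @ (x @ w - y) / len(y)  # gradient on local data only
          grad = np.empty_like(grad_local)
          comm.Allreduce(grad_local, grad, op=MPI.SUM)   # O(log P) global summation
          w -= 0.1 * grad / comm.Get_size()              # average gradient, same update everywhere

      if comm.Get_rank() == 0:
          print("learned weights:", np.round(w, 2))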

  10. Studying the properties of massive galaxies in protoclusters using millimetre-wavelength observations

    NASA Astrophysics Data System (ADS)

    Zeballos, M.; Hughes, D. H.; Aretxaga, I.; Wilson, G.

    2011-10-01

    We present an analysis of the number density and spatial distribution of the population of millimetre galaxies (MMGs) towards 17 high-z active galaxies using 1.1 mm observations taken with the AzTEC camera on the Atacama Submillimeter Telescope Experiment (ASTE) and the James Clerk Maxwell Telescope (JCMT). The sample allows us to study the properties of MMGs in protocluster environments and compare them to the population in blank (unbiased) fields. The goal is to identify if these biased environments are responsible for differences in the number and distribution of dust-obscured star-forming galaxies and whether these changes support the suggestion that MMGs are the progenitors of massive (elliptical) galaxies we see today in the centre of rich clusters.

  11. Can Distributed Volunteers Accomplish Massive Data Analysis Tasks?

    NASA Technical Reports Server (NTRS)

    Kanefsky, B.; Barlow, N. G.; Gulick, V. C.

    2001-01-01

    We argue that many image analysis tasks can be performed by distributed amateurs. Our pilot study, with crater surveying and classification, has produced encouraging results in terms of both quantity (100,000 crater entries in 2 months) and quality. Additional information is contained in the original extended abstract.

  12. A ten-year follow-up of a study of memory for the attack of September 11, 2001: Flashbulb memories and memories for flashbulb events.

    PubMed

    Hirst, William; Phelps, Elizabeth A; Meksin, Robert; Vaidya, Chandan J; Johnson, Marcia K; Mitchell, Karen J; Buckner, Randy L; Budson, Andrew E; Gabrieli, John D E; Lustig, Cindy; Mather, Mara; Ochsner, Kevin N; Schacter, Daniel; Simons, Jon S; Lyle, Keith B; Cuc, Alexandru F; Olsson, Andreas

    2015-06-01

    Within a week of the attack of September 11, 2001, a consortium of researchers from across the United States distributed a survey asking about the circumstances in which respondents learned of the attack (their flashbulb memories) and the facts about the attack itself (their event memories). Follow-up surveys were distributed 11, 25, and 119 months after the attack. The study, therefore, examines retention of flashbulb memories and event memories at a substantially longer retention interval than any previous study using a test-retest methodology, allowing for the study of such memories over the long term. There was rapid forgetting of both flashbulb and event memories within the first year, but the forgetting curves leveled off after that, not significantly changing even after a 10-year delay. Despite the initial rapid forgetting, confidence remained high throughout the 10-year period. Five putative factors affecting flashbulb memory consistency and event memory accuracy were examined: (a) attention to media, (b) the amount of discussion, (c) residency, (d) personal loss and/or inconvenience, and (e) emotional intensity. After 10 years, none of these factors predicted flashbulb memory consistency; media attention and ensuing conversation predicted event memory accuracy. Inconsistent flashbulb memories were more likely to be repeated rather than corrected over the 10-year period; inaccurate event memories, however, were more likely to be corrected. The findings suggest that even traumatic memories and those implicated in a community's collective identity may be inconsistent over time and these inconsistencies can persist without the corrective force of external influences. (c) 2015 APA, all rights reserved.

  13. Binary stars in the Galactic thick disc

    NASA Astrophysics Data System (ADS)

    Izzard, Robert G.; Preece, Holly; Jofre, Paula; Halabi, Ghina M.; Masseron, Thomas; Tout, Christopher A.

    2018-01-01

    The combination of asteroseismologically measured masses with abundances from detailed analyses of stellar atmospheres challenges our fundamental knowledge of stars and our ability to model them. Ancient red-giant stars in the Galactic thick disc are proving to be most troublesome in this regard. They are older than 5 Gyr, a lifetime corresponding to an initial stellar mass of about 1.2 M⊙. So why do the masses of a sizeable fraction of thick-disc stars exceed 1.3 M⊙, with some as massive as 2.3 M⊙? We answer this question by considering duplicity in the thick-disc stellar population using a binary population-nucleosynthesis model. We examine how mass transfer and merging affect the stellar mass distribution and surface abundances of carbon and nitrogen. We show that a few per cent of thick-disc stars can interact in binary star systems and become more massive than 1.3 M⊙. Of these stars, most are single because they are merged binaries. Some stars more massive than 1.3 M⊙ form in binaries by wind mass transfer. We compare our results to a sample of the APOKASC data set and find reasonable agreement except in the number of these thick-disc stars more massive than 1.3 M⊙. This problem is resolved by the use of a logarithmically flat orbital-period distribution and a large binary fraction.

  14. A dust-obscured massive maximum-starburst galaxy at a redshift of 6.34.

    PubMed

    Riechers, Dominik A; Bradford, C M; Clements, D L; Dowell, C D; Pérez-Fournon, I; Ivison, R J; Bridge, C; Conley, A; Fu, Hai; Vieira, J D; Wardlow, J; Calanog, J; Cooray, A; Hurley, P; Neri, R; Kamenetzky, J; Aguirre, J E; Altieri, B; Arumugam, V; Benford, D J; Béthermin, M; Bock, J; Burgarella, D; Cabrera-Lavers, A; Chapman, S C; Cox, P; Dunlop, J S; Earle, L; Farrah, D; Ferrero, P; Franceschini, A; Gavazzi, R; Glenn, J; Solares, E A Gonzalez; Gurwell, M A; Halpern, M; Hatziminaoglou, E; Hyde, A; Ibar, E; Kovács, A; Krips, M; Lupu, R E; Maloney, P R; Martinez-Navajas, P; Matsuhara, H; Murphy, E J; Naylor, B J; Nguyen, H T; Oliver, S J; Omont, A; Page, M J; Petitpas, G; Rangwala, N; Roseboom, I G; Scott, D; Smith, A J; Staguhn, J G; Streblyanska, A; Thomson, A P; Valtchanov, I; Viero, M; Wang, L; Zemcov, M; Zmuidzinas, J

    2013-04-18

    Massive present-day early-type (elliptical and lenticular) galaxies probably gained the bulk of their stellar mass and heavy elements through intense, dust-enshrouded starbursts--that is, increased rates of star formation--in the most massive dark-matter haloes at early epochs. However, it remains unknown how soon after the Big Bang massive starburst progenitors exist. The measured redshift (z) distribution of dusty, massive starbursts has long been suspected to be biased low in z owing to selection effects, as confirmed by recent findings of systems with redshifts as high as ~5 (refs 2-4). Here we report the identification of a massive starburst galaxy at z = 6.34 through a submillimetre colour-selection technique. We unambiguously determined the redshift from a suite of molecular and atomic fine-structure cooling lines. These measurements reveal a hundred billion solar masses of highly excited, chemically evolved interstellar medium in this galaxy, which constitutes at least 40 per cent of the baryonic mass. A 'maximum starburst' converts the gas into stars at a rate more than 2,000 times that of the Milky Way, a rate among the highest observed at any epoch. Despite the overall downturn in cosmic star formation towards the highest redshifts, it seems that environments mature enough to form the most massive, intense starbursts existed at least as early as 880 million years after the Big Bang.

  15. Molecular line study of massive star-forming regions from the Red MSX Source survey

    NASA Astrophysics Data System (ADS)

    Yu, Naiping; Wang, Jun-Jie

    2014-05-01

    In this paper, we have selected a sample of massive star-forming regions from the Red MSX Source survey, in order to study star formation activities (mainly outflow and inflow signatures). We have focused on three molecular lines from the Millimeter Astronomy Legacy Team Survey at 90 GHz: HCO+(1-0), H13CO+(1-0) and SiO(2-1). According to previous observations, our sources can be divided into two groups: nine massive young stellar object candidates (radio-quiet) and 10 H II regions (which have spherical or unresolved radio emission). Outflow activities have been found in 11 sources, while only three sources in total show inflow signatures. The high outflow detection rate means that outflows are common in massive star-forming regions. The inflow detection rate was relatively low; we suggest that this is because of beam dilution of the telescope. All three inflow candidates have outflow(s). The outward radiation and thermal pressure from the central massive star(s) do not seem to be strong enough to halt accretion in G345.0034-00.2240. Our simple model of G318.9480-00.1969 shows that it has an infall velocity of about 1.8 km s^-1. The spectral energy distribution analysis confirms that our sources are massive and intermediate-mass star-forming regions.

  16. Two demonstrators and a simulator for a sparse, distributed memory

    NASA Technical Reports Server (NTRS)

    Brown, Robert L.

    1987-01-01

    Described are two programs demonstrating different aspects of Kanerva's Sparse, Distributed Memory (SDM). These programs run on Sun 3 workstations, one using color, and have straightforward graphically oriented user interfaces and graphical output. Presented are descriptions of the programs, how to use them, and what they show. Additionally, this paper describes the software simulator behind each program.

  17. Dynamic overset grid communication on distributed memory parallel processors

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Weeratunga, Sisira K.; Meakin, Robert L.

    1993-01-01

    A parallel distributed memory implementation of intergrid communication for dynamic overset grids is presented. Included are discussions of various options considered during development. Results are presented comparing an Intel iPSC/860 to a single processor Cray Y-MP. Results for grids in relative motion show the iPSC/860 implementation to be faster than the Cray implementation.

  18. A manual for PARTI runtime primitives

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel

    1990-01-01

    Primitives are presented that are designed to help users efficiently program irregular problems (e.g., unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equations solvers) on distributed memory machines. These primitives are also designed for use in compilers for distributed memory multiprocessors. Communications patterns are captured at runtime, and the appropriate send and receive messages are automatically generated.

  19. pyCTQW: A continuous-time quantum walk simulator on distributed memory computers

    NASA Astrophysics Data System (ADS)

    Izaac, Josh A.; Wang, Jingbo B.

    2015-01-01

    In the general field of quantum information and computation, quantum walks are playing an increasingly important role in constructing physical models and quantum algorithms. We have recently developed a distributed memory software package pyCTQW, with an object-oriented Python interface, that allows efficient simulation of large multi-particle CTQW (continuous-time quantum walk)-based systems. In this paper, we present an introduction to the Python and Fortran interfaces of pyCTQW, discuss various numerical methods of calculating the matrix exponential, and demonstrate the performance behavior of pyCTQW on a distributed memory cluster. In particular, the Chebyshev and Krylov-subspace methods for calculating the quantum walk propagation are provided, as well as methods for visualization and data analysis.
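
    pyCTQW's own API is not reproduced here; as a minimal serial illustration of what a CTQW propagation computes, the walker state evolves as psi(t) = exp(-iHt) psi(0), with H the adjacency (or Laplacian) matrix of the graph. The sketch below uses SciPy's dense matrix exponential on a small cycle graph; the Chebyshev and Krylov-subspace methods mentioned above exist precisely to avoid this dense computation for large systems:

      # Continuous-time quantum walk on a cycle: psi(t) = expm(-i H t) @ psi(0).
      import numpy as np
      from scipy.linalg import expm

      n = 8
      H = np.zeros((n, n))
      for j in range(n):                        # adjacency matrix of an n-node cycle graph
          H[j, (j + 1) % n] = H[(j + 1) % n, j] = 1.0

      psi0 = np.zeros(n, dtype=complex)
      psi0[0] = 1.0                             # walker starts on node 0

      psi_t = expm(-1j * H * 2.0) @ psi0        # propagate to t = 2
      prob = np.abs(psi_t) ** 2
      print("node probabilities:", np.round(prob, 3), "(sum =", round(float(prob.sum()), 6), ")")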

  20. Biaxial Fatigue Behavior of Niti Shape Memory Alloy

    DTIC Science & Technology

    2005-03-01

    BIAXIAL FATIGUE BEHAVIOR OF NiTi SHAPE MEMORY ALLOY. Thesis (AFIT/GA/ENY/05-M06) by Daniel M. Jensen, 1st Lieutenant... Presented to the Faculty, Department of Aeronautics and Astronautics, Graduate School of... Approved for public release; distribution unlimited.

  1. The Measurement of Visuo-Spatial and Verbal-Numerical Working Memory: Development of IRT-Based Scales

    ERIC Educational Resources Information Center

    Vock, Miriam; Holling, Heinz

    2008-01-01

    The objective of this study is to explore the potential for developing IRT-based working memory scales for assessing specific working memory components in children (8-13 years). These working memory scales should measure cognitive abilities reliably in the upper range of ability distribution as well as in the normal range, and provide a…

  2. Task set induces dynamic reallocation of resources in visual short-term memory.

    PubMed

    Sheremata, Summer L; Shomstein, Sarah

    2017-08-01

    Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines the asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.

  3. Endoscopic management of massive mercury ingestion

    PubMed Central

    Zag, Levente; Berkes, Gábor; Takács, Irma F; Szepes, Attila; Szabó, István

    2017-01-01

    Abstract Rationale: Ingestion of a massive amount of metallic mercury was thought to be harmless until the last century. After that, in a number of cases, mercury ingestion has been associated with appendicitis, impaired liver function, memory deficits, aspiration leading to pneumonitis, and acute renal failure. Treatment includes gastric lavage, giving laxatives and chelating agents, but rapid removal of metallic mercury with gastroscopy has not been used. Patient concerns: An 18-year-old man was admitted to our emergency department after drinking 1000 g of metallic mercury as a suicide attempt. Diagnosis: Except for mild umbilical tenderness, he had no other symptoms. Radiography showed a metallic density in the area of the stomach. Intervention: Gastroscopy was performed to remove the mercury. One large pool and several small droplets of mercury were removed from the stomach. Outcomes: Blood and urine mercury levels of the patient remained low during hospitalization. No symptoms of mercury intoxication developed during the follow-up period. Lessons: Massive mercury ingestion may cause several symptoms, which can be prevented with prompt treatment. We used endoscopy to remove the mercury, which shortened the exposure time and minimized the risk of aspiration. This is the first case in which endoscopy was used for the management of mercury ingestion. PMID:28562544

  4. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
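
    The scalability analysis mentioned above rests on Amdahl's law, S(N) = 1 / ((1 - f) + f/N) for a parallel fraction f on N cores; a 12-fold speedup on 12 cores therefore implies an essentially fully parallel workload. A quick sketch with illustrative values of f:

      # Amdahl's law: attainable speedup with parallel fraction f on N cores.
      def amdahl_speedup(f, n_cores):
          return 1.0 / ((1.0 - f) + f / n_cores)

      for f in (0.90, 0.99, 0.999):
          row = "  ".join(f"{n:3d} cores -> {amdahl_speedup(f, n):6.2f}x" for n in (12, 48, 192))
          print(f"f = {f:5.3f}: {row}")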

  5. Diffusion theory of decision making in continuous report.

    PubMed

    Smith, Philip L

    2016-07-01

    I present a diffusion model for decision making in continuous report tasks, in which a continuous, circularly distributed stimulus attribute in working memory is matched to a representation of the attribute in the stimulus display. Memory retrieval is modeled as a 2-dimensional diffusion process with vector-valued drift on a disk, whose bounding circle represents the decision criterion. The direction and magnitude of the drift vector describe the identity of the stimulus and the quality of its representation in memory, respectively. The point at which the diffusion exits the disk determines the reported value of the attribute and the time to exit the disk determines the decision time. Expressions for the joint distribution of decision times and report outcomes are obtained by means of the Girsanov change-of-measure theorem, which allows the properties of the nonzero-drift diffusion process to be characterized as a function of a Euclidean-distance Bessel process. Predicted report precision is equal to the product of the decision criterion and the drift magnitude and follows a von Mises distribution, in agreement with the treatment of precision in the working memory literature. Trial-to-trial variability in criterion and drift rate leads, respectively, to direct and inverse relationships between report accuracy and decision times, in agreement with, and generalizing, the standard diffusion model of 2-choice decisions. The 2-dimensional model provides a process account of working memory precision and its relationship with the diffusion model, and a new way to investigate the properties of working memory, via the distributions of decision times. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
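
    A Monte Carlo sketch of the model described above: a two-dimensional random walk with constant drift starts at the centre of a disk, the angle at which it first crosses the bounding circle is the reported attribute value, and the crossing time is the decision time. The drift, noise, and criterion values below are arbitrary choices, not fitted parameters:

      # 2-D drift-diffusion to a circular criterion: exit angle = report, exit time = RT.
      import numpy as np

      rng = np.random.default_rng(1)

      def trial(drift_angle, drift_mag=1.0, radius=1.0, sigma=1.0, dt=0.002):
          drift = drift_mag * np.array([np.cos(drift_angle), np.sin(drift_angle)])
          pos, t = np.zeros(2), 0.0
          while np.hypot(pos[0], pos[1]) < radius:           # still inside the decision criterion
              pos += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
              t += dt
          return np.arctan2(pos[1], pos[0]), t               # reported angle, decision time

      reports, rts = zip(*(trial(drift_angle=0.0) for _ in range(200)))
      print("mean |angular error| (rad):", round(float(np.mean(np.abs(reports))), 3))
      print("mean decision time:", round(float(np.mean(rts)), 3))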

  6. Kmerind: A Flexible Parallel Library for K-mer Indexing of Biological Sequences on Distributed Memory Systems.

    PubMed

    Pan, Tony; Flick, Patrick; Jain, Chirag; Liu, Yongchao; Aluru, Srinivas

    2017-10-09

    Counting and indexing fixed length substrings, or k-mers, in biological sequences is a key step in many bioinformatics tasks including genome alignment and mapping, genome assembly, and error correction. While advances in next generation sequencing technologies have dramatically reduced the cost and improved latency and throughput, few bioinformatics tools can efficiently process the datasets at the current generation rate of 1.8 terabases every 3 days. We present Kmerind, a high performance parallel k-mer indexing library for distributed memory environments. The Kmerind library provides a set of simple and consistent APIs with sequential semantics and parallel implementations that are designed to be flexible and extensible. Kmerind's k-mer counter performs similarly or better than the best existing k-mer counting tools even on shared memory systems. In a distributed memory environment, Kmerind counts k-mers in a 120 GB sequence read dataset in less than 13 seconds on 1024 Xeon CPU cores, and fully indexes their positions in approximately 17 seconds. Querying for 1% of the k-mers in these indices can be completed in 0.23 seconds and 28 seconds, respectively. Kmerind is the first k-mer indexing library for distributed memory environments, and the first extensible library for general k-mer indexing and counting. Kmerind is available at https://github.com/ParBLiSS/kmerind.
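
    Kmerind's distributed API is not reproduced here; the sketch below shows only the underlying serial operation, counting k-mers and indexing their positions, which is what the library parallelizes across distributed memory (the toy read is made up):

      # Serial k-mer counting and position indexing (the operation Kmerind distributes).
      from collections import defaultdict

      def kmer_index(seq, k):
          counts, positions = defaultdict(int), defaultdict(list)
          for i in range(len(seq) - k + 1):
              kmer = seq[i:i + k]
              counts[kmer] += 1
              positions[kmer].append(i)
          return counts, positions

      read = "ACGTACGTGACGT"
      counts, positions = kmer_index(read, k=4)
      print(counts["ACGT"], positions["ACGT"])    # 3 occurrences, at positions 0, 4, 9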

  7. The Shark Random Swim - (Lévy Flight with Memory)

    NASA Astrophysics Data System (ADS)

    Businger, Silvia

    2018-05-01

    The Elephant Random Walk (ERW), first introduced by Schütz and Trimper (Phys Rev E 70:045101, 2004), is a one-dimensional simple random walk on Z having a memory about the whole past. We study the Shark Random Swim, a random walk with memory about the whole past, whose steps are α-stable distributed with α ∈ (0, 2]. Our aim in this work is to study the impact of the heavy-tailed step distributions on the asymptotic behavior of the random walk. We shall see that, as for the ERW, the asymptotic behavior of the Shark Random Swim depends on its memory parameter p, and that a phase transition can be observed at the critical value p = 1/α.
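
    A small simulation sketch of a walk of this kind: with probability p the walker reuses a uniformly chosen earlier step (the memory), and otherwise it draws a fresh symmetric α-stable increment. This is an illustrative variant written by analogy with the Elephant Random Walk, not necessarily the paper's exact dynamics:

      # Illustrative memory walk with alpha-stable steps (a variant inspired by the ERW;
      # not necessarily the exact Shark Random Swim dynamics).
      import numpy as np
      from scipy.stats import levy_stable

      rng = np.random.default_rng(2)

      def memory_walk(n_steps, p, alpha):
          fresh = levy_stable.rvs(alpha, beta=0, size=n_steps, random_state=rng)
          steps = [fresh[0]]                         # the first step is always a fresh draw
          for i in range(1, n_steps):
              if rng.random() < p:                   # remember: reuse a uniformly chosen old step
                  steps.append(steps[rng.integers(i)])
              else:                                  # forget: take a fresh alpha-stable step
                  steps.append(fresh[i])
          return np.cumsum(steps)

      for p in (0.2, 0.8):
          path = memory_walk(2_000, p=p, alpha=1.5)
          print(f"p = {p}: final position {path[-1]:12.1f}")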

  8. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for the generation/assembly of element matrices, the solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  9. Efficient entanglement distillation without quantum memory.

    PubMed

    Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J; Fiurášek, Jaromír; Schnabel, Roman

    2016-05-31

    Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution.

  10. Efficient entanglement distillation without quantum memory

    PubMed Central

    Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J.; Fiurášek, Jaromír; Schnabel, Roman

    2016-01-01

    Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution. PMID:27241946

  11. UPC++ Programmer’s Guide (v1.0 2017.9)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachan, J.; Baden, S.; Bonachea, D.

    UPC++ is a C++11 library that provides Asynchronous Partitioned Global Address Space (APGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The APGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, APGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
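
    UPC++ itself is a C++ library; as a language-neutral illustration of the explicit, one-sided remote-memory access style described above, the sketch below uses MPI one-sided communication via mpi4py (this is the analogous MPI mechanism, not UPC++'s API):

      # Explicit one-sided remote read with MPI (mpi4py); analogous in spirit to a PGAS remote get.
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each rank exposes a small array (its "shared segment") through an MPI window.
      local = np.full(4, rank, dtype='i')
      win = MPI.Win.Create(local, comm=comm)

      target = (rank + 1) % size
      buf = np.empty(4, dtype='i')

      win.Fence()
      win.Get(buf, target_rank=target)     # explicit remote read from the neighbour's segment
      win.Fence()

      print(rank, "read from rank", target, ":", buf)
      win.Free()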

  12. UPC++ Programmer’s Guide, v1.0-2018.3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachan, J.; Baden, S.; Bonachea, Dan

    UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

  13. Retrieval of high-fidelity memory arises from distributed cortical networks.

    PubMed

    Wais, Peter E; Jahanikia, Sahar; Steiner, Daniel; Stark, Craig E L; Gazzaley, Adam

    2017-04-01

    Medial temporal lobe (MTL) function is well established as necessary for memory of facts and events. It is likely that lateral cortical regions critically guide cognitive control processes to tune in high-fidelity details that are most relevant for memory retrieval. Here, convergent results from functional and structural MRI show that retrieval of detailed episodic memory arises from lateral cortical-MTL networks, including regions of the inferior frontal and angular gyri. Results also suggest that recognition of items based on low-fidelity, generalized information, rather than memory arising from retrieval of relevant episodic details, is not associated with functional connectivity between MTL and lateral cortical regions. Additionally, individual differences in microstructural properties of white matter pathways associated with distributed MTL-cortical networks are positively correlated with better performance on a mnemonic discrimination task. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Development and Verification of Sputtered Thin-Film Nickel-Titanium (NiTi) Shape Memory Alloy (SMA)

    DTIC Science & Technology

    2015-08-01

    Development and Verification of Sputtered Thin-Film Nickel-Titanium (NiTi) Shape Memory Alloy (SMA), by Cory R Knick and Christopher J Morris, ...Laboratory. Approved for public release; distribution unlimited.

  15. JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays

    USGS Publications Warehouse

    Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.

    2004-01-01

    JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. In JuxtaView we present a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype, implemented over a local area network, where the image is distributed using LambdaRAM across the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.
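
    The memory-mapped file approach that LambdaRAM is compared against can be sketched in a few lines: the full image stays on disk and only the tile a display node touches is paged into memory. The file name, image size, and tile size below are made up:

      # Out-of-core tile access through a memory-mapped file (name and sizes are made up).
      import numpy as np

      H, W = 8_192, 8_192                                     # a 64-megapixel 8-bit image
      np.memmap("huge_image.dat", dtype=np.uint8, mode="w+", shape=(H, W)).flush()

      image = np.memmap("huge_image.dat", dtype=np.uint8, mode="r", shape=(H, W))

      def read_tile(row0, col0, tile_h=1024, tile_w=1024):
          # Only the pages this slice touches are brought into memory, not the whole file.
          return np.array(image[row0:row0 + tile_h, col0:col0 + tile_w])

      tile = read_tile(4096, 4096)
      print(tile.shape, tile.nbytes, "bytes resident for this tile")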

  16. Human short-term spatial memory: precision predicts capacity.

    PubMed

    Banta Lavenex, Pamela; Boujon, Valérie; Ndarugendamwo, Angélique; Lavenex, Pierre

    2015-03-01

    Here, we aimed to determine the capacity of human short-term memory for allocentric spatial information in a real-world setting. Young adults were tested on their ability to learn, on a trial-unique basis, and remember over a 1-min interval the location(s) of 1, 3, 5, or 7 illuminating pads, among 23 pads distributed in a 4m×4m arena surrounded by curtains on three sides. Participants had to walk to and touch the pads with their foot to illuminate the goal locations. In contrast to the predictions from classical slot models of working memory capacity limited to a fixed number of items, i.e., Miller's magical number 7 or Cowan's magical number 4, we found that the number of visited locations to find the goals was consistently about 1.6 times the number of goals, whereas the number of correct choices before erring and the number of errorless trials varied with memory load even when memory load was below the hypothetical memory capacity. In contrast to resource models of visual working memory, we found no evidence that memory resources were evenly distributed among unlimited numbers of items to be remembered. Instead, we found that memory for even one individual location was imprecise, and that memory performance for one location could be used to predict memory performance for multiple locations. Our findings are consistent with a theoretical model suggesting that the precision of the memory for individual locations might determine the capacity of human short-term memory for spatial information. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. CANDELS Sheds Light on the Environmental Quenching of Low-mass Galaxies

    NASA Astrophysics Data System (ADS)

    Guo, Yicheng; Bell, Eric F.; Lu, Yu; Koo, David C.; Faber, Sandra M.; CANDELS

    2018-01-01

    We investigate the environmental quenching of galaxies, especially those with stellar masses (M*) smaller than 10^9.5 M⊙, beyond the local universe. Essentially all local low-mass quenched galaxies (QGs) are believed to live close to massive central galaxies, which is a demonstration of environmental quenching. We use CANDELS data to test whether or not such a dwarf QG--massive central galaxy connection exists beyond the local universe. For this purpose, we only need a statistically representative, rather than a complete, sample of low-mass galaxies, which enables our study out to z > 1.5. For each low-mass galaxy, we measure the projected distance (dproj) to its nearest massive (M* > 10^10.5 M⊙) neighbor within a redshift range. At a given z and M*, the environmental quenching effect is considered to be observed if the dproj distribution of QGs is significantly skewed toward lower values than that of star-forming galaxies (SFGs). For galaxies with 10^8 M⊙ < M* < 10^10 M⊙, such a difference between the dproj distributions of quenched and star-forming populations is detected up to z ˜ 1. Also, about 10% of the quenched galaxies in our sample are located between two and four virial radii (R_Vir) of the massive halos. The median projected distance from low-mass QGs to their massive neighbors (dproj/R_Vir) decreases with satellite M* at M* < 10^9.5 M⊙, but increases with satellite M* at M* > 10^9.5 M⊙. This trend suggests a smooth, if any, transition of the quenching timescale around M* of 10^9.5 M⊙ at 0.5 < z < 1.0.

  18. The nature of ultra-massive lens galaxies

    NASA Astrophysics Data System (ADS)

    Canameras, Raoul

    2017-08-01

    During the past decade, strong gravitational lensing analyses have contributed tremendously to the characterization of the inner properties of massive early-type galaxies, beyond the local Universe. Here we intend to extend studies of this kind to the most massive lens galaxies known to date, well outside the mass limits investigated by previous lensing surveys. This will allow us to probe the physics of the likely descendants of the most violent episodes of star formation and of the compact massive galaxies at high redshift. We propose WFC3 imaging (F438W and F160W) of four extremely massive early-type lens galaxies at z ~ 0.5, in order to put them into context with the evolutionary trends of ellipticals as a function of mass and redshift. These systems were discovered in the SDSS and show one single main lens galaxy with a stellar mass above 1.5x10^12 Msun and large Einstein radii. Our high-resolution spectroscopic follow-up with VLT/X-shooter provides secure lens and source redshifts, between 0.3 and 0.7 and between 1.5 and 2.5, respectively, and confirms extreme stellar velocity dispersions > 400 km/s for the lenses. The excellent angular resolution of the proposed WFC3 imaging - not achievable from the ground - is the remaining indispensable piece of information to: (1) resolve the lens structural parameters and obtain robust measurements of their stellar mass distributions; (2) model the amount and distribution of the lens total masses and measure their M/L ratios and stellar IMF with joint strong lensing and stellar dynamics analyses; and (3) enhance our on-going lens models through the most accurate positions and morphologies of the blue multiply-imaged sources.

  19. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.

  20. Kanerva's sparse distributed memory with multiple hamming thresholds

    NASA Technical Reports Server (NTRS)

    Pohja, Seppo; Kaski, Kimmo

    1992-01-01

    If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has a better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address or hard location. The selection of the hard locations for reading or writing can be done in parallel, which SDM implementations can exploit.
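
    As a rough illustration of per-location selection thresholds, the sketch below implements a toy binary SDM in which each hard location carries its own Hamming threshold. The sizes, threshold range, and counter update rule are illustrative assumptions, not the authors' parameterization.

      import numpy as np

      rng = np.random.default_rng(0)

      N_BITS = 256        # address/word length (assumed)
      N_LOCATIONS = 1000  # number of hard locations (assumed)

      # Random hard-location addresses and one Hamming threshold per location.
      hard_addresses = rng.integers(0, 2, size=(N_LOCATIONS, N_BITS), dtype=np.int8)
      thresholds = rng.integers(100, 120, size=N_LOCATIONS)
      counters = np.zeros((N_LOCATIONS, N_BITS), dtype=np.int32)

      def select(address):
          """Activate every hard location whose Hamming distance is within its own threshold."""
          dists = np.count_nonzero(hard_addresses != address, axis=1)
          return dists <= thresholds

      def write(address, word):
          """Add +1/-1 to the counters of all selected locations (standard SDM write)."""
          sel = select(address)
          counters[sel] += np.where(word == 1, 1, -1).astype(np.int32)

      def read(address):
          """Sum counters of selected locations and threshold at zero (standard SDM read)."""
          sel = select(address)
          return (counters[sel].sum(axis=0) > 0).astype(np.int8)

      # Usage: store a pattern at its own address, then retrieve it from a noisy cue.
      pattern = rng.integers(0, 2, size=N_BITS, dtype=np.int8)
      write(pattern, pattern)
      cue = pattern.copy()
      cue[:20] ^= 1  # flip 20 bits of the cue
      recovered = read(cue)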

  1. Implementation of collisions on GPU architecture in the Vorpal code

    NASA Astrophysics Data System (ADS)

    Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John

    2017-10-01

    The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
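
    For context, the null collision method mentioned here can be sketched in a few lines: collisions are sampled at a constant maximum rate, and each candidate collision is accepted as real with probability equal to the ratio of the true rate to that maximum. The sketch below is a generic, vectorized illustration with a made-up rate function, not Vorpal's GPU implementation.

      import numpy as np

      rng = np.random.default_rng(1)

      def true_collision_rate(speed):
          """Placeholder velocity-dependent collision frequency (assumed form)."""
          return 1.0e6 * speed / (1.0 + speed)

      NU_MAX = 1.0e6   # upper bound on the collision frequency [1/s] (assumed)
      DT = 1.0e-7      # time step [s]
      speeds = rng.uniform(0.0, 10.0, size=100_000)  # particle speeds (arbitrary units)

      # Null collision method: every particle is a collision candidate with
      # probability 1 - exp(-NU_MAX * DT); a candidate is kept as a real collision
      # with probability nu(v)/NU_MAX, otherwise it is a null (do-nothing) collision.
      p_max = 1.0 - np.exp(-NU_MAX * DT)
      candidates = rng.random(speeds.size) < p_max
      accept = rng.random(speeds.size) < true_collision_rate(speeds) / NU_MAX
      real_collisions = candidates & accept

      # Each real collision is independent and applies the same update rule,
      # which is why the scheme maps well onto single-instruction multiple-data hardware.
      print("real collision fraction:", real_collisions.mean())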

  2. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  3. The Earth Data Analytic Services (EDAS) Framework

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; Duffy, D.

    2017-12-01

    Faced with unprecedented growth in earth data volume and demand, NASA has developed the Earth Data Analytic Services (EDAS) framework, a high performance big data analytics framework built on Apache Spark. This framework enables scientists to execute data processing workflows combining common analysis operations close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using vetted earth data analysis tools (ESMF, CDAT, NCO, etc.). EDAS utilizes a dynamic caching architecture, a custom distributed array framework, and a streaming parallel in-memory workflow for efficiently processing huge datasets within limited memory spaces with interactive response times. EDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using direct web service calls, a Python script, a Unix-like shell client, or a JavaScript-based web application. New analytic operations can be developed in Python, Java, or Scala (with support for other languages planned). Client packages in Python, Java/Scala, or JavaScript contain everything needed to build and submit EDAS requests. The EDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service enables decision makers to compare multiple reanalysis datasets and investigate trends, variability, and anomalies in earth system dynamics around the globe.

  4. A manual for PARTI runtime primitives, revision 1

    NASA Technical Reports Server (NTRS)

    Das, Raja; Saltz, Joel; Berryman, Harry

    1991-01-01

    Primitives are presented that are designed to help users efficiently program irregular problems (e.g., unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equation solvers) on distributed memory machines. These primitives are also designed for use in compilers for distributed memory multiprocessors. Communication patterns are captured at runtime, and the appropriate send and receive messages are automatically generated.
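
    Runtime approaches of this kind are commonly organized as an inspector/executor pattern: an inspector pass examines the irregular indirection arrays and builds a communication schedule once, and an executor pass reuses that schedule to perform the actual gathers. The sketch below is a generic single-process illustration of that idea, not the PARTI primitives themselves; the function names and schedule layout are assumptions.

      import numpy as np

      def inspector(global_indices, owner_of):
          """Group the off-processor indices this process will need by owning processor.

          Returns a 'schedule': {owner: array of global indices to request}.
          In PARTI-style libraries this is where send/receive lists are built once
          and then reused across many iterations.
          """
          schedule = {}
          for idx in np.unique(global_indices):
              schedule.setdefault(owner_of(idx), []).append(idx)
          return {owner: np.array(ids) for owner, ids in schedule.items()}

      def executor(schedule, remote_fetch):
          """Carry out the gathers described by the schedule (one message per owner)."""
          local_copy = {}
          for owner, ids in schedule.items():
              values = remote_fetch(owner, ids)  # stands in for a batched send/receive
              local_copy.update(zip(ids.tolist(), values.tolist()))
          return local_copy

      # Usage with toy data: an unstructured-mesh-style indirection array.
      edges = np.array([3, 7, 7, 42, 3, 19])            # global indices referenced locally
      owner = lambda i: i % 4                           # assumed block-cyclic ownership
      fetch = lambda o, ids: ids.astype(float) * 10.0   # fake remote data
      sched = inspector(edges, owner)
      ghost_values = executor(sched, fetch)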

  5. Distributed memory parallel Markov random fields using graph partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinemann, C.; Perciano, T.; Ushizima, D.

    Markov random fields (MRF) based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
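
    A minimal sketch of the distribution pattern described here, splitting an image into blocks, processing each block on its own MPI rank, and gathering the labels, is shown below using mpi4py. It is not the MPI-PMRF code; the strip decomposition, the placeholder segment_block function, and the absence of halo exchange between neighboring blocks are simplifying assumptions.

      # Run with, e.g.: mpirun -n 4 python distributed_blocks.py
      import numpy as np
      from mpi4py import MPI

      def segment_block(block):
          """Placeholder for the per-block MRF optimization (here: a simple threshold)."""
          return (block > block.mean()).astype(np.uint8)

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      if rank == 0:
          image = np.random.default_rng(0).random((512, 512))
          blocks = np.array_split(image, size, axis=0)  # one strip of rows per rank
      else:
          blocks = None

      block = comm.scatter(blocks, root=0)   # distribute the data across processors
      labels = segment_block(block)          # purely local computation
      result = comm.gather(labels, root=0)   # collect the per-block segmentations

      if rank == 0:
          segmentation = np.vstack(result)
          print("segmented image shape:", segmentation.shape)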

  6. Asymmetric orbital distribution near mean motion resonance: Application to planets observed by Kepler and radial velocities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Ji-Wei, E-mail: jwxie@nju.edu.cn, E-mail: jwxie@astro.utoronto.ca

    2014-05-10

    Many multiple-planet systems have been found by the Kepler transit survey and various radial velocity (RV) surveys. Kepler planets show an asymmetric feature, namely, there are small but significant deficits/excesses of planet pairs with orbital period spacing slightly narrow/wide of the exact resonance, particularly near the first order mean motion resonance (MMR), such as 2:1 and 3:2 MMR. Similarly, if not exactly the same, an asymmetric feature (pileup wide of 2:1 MMR) is also seen in RV planets, but only for massive ones. We analytically and numerically study planets' orbital evolutions near and in the MMR. We find that their orbital period ratios could be asymmetrically distributed around the MMR center regardless of dissipation. In the case of no dissipation, Kepler planets' asymmetric orbital distribution could be partly reproduced for 3:2 MMR but not for 2:1 MMR, implying that dissipation might be more important to the latter. The pileup of massive RV planets just wide of 2:1 MMR is found to be consistent with the scenario that planets formed separately then migrated toward the MMR. The location of the pileup infers a K value of 1-100 on the order of magnitude for massive planets, where K is the damping rate ratio between orbital eccentricity and semimajor axis during planet migration.

  7. Neural bases of orthographic long-term memory and working memory in dysgraphia

    PubMed Central

    Purcell, Jeremy; Hillis, Argye E.; Capasso, Rita; Miceli, Gabriele

    2016-01-01

    Spelling a word involves the retrieval of information about the word’s letters and their order from long-term memory as well as the maintenance and processing of this information by working memory in preparation for serial production by the motor system. While it is known that brain lesions may selectively affect orthographic long-term memory and working memory processes, relatively little is known about the neurotopographic distribution of the substrates that support these cognitive processes, or the lesions that give rise to the distinct forms of dysgraphia that affect these cognitive processes. To examine these issues, this study uses a voxel-based mapping approach to analyse the lesion distribution of 27 individuals with dysgraphia subsequent to stroke, who were identified on the basis of their behavioural profiles alone, as suffering from deficits only affecting either orthographic long-term or working memory, as well as six other individuals with deficits affecting both sets of processes. The findings provide, for the first time, clear evidence of substrates that selectively support orthographic long-term and working memory processes, with orthographic long-term memory deficits centred in either the left posterior inferior frontal region or left ventral temporal cortex, and orthographic working memory deficits primarily arising from lesions of the left parietal cortex centred on the intraparietal sulcus. These findings also contribute to our understanding of the relationship between the neural instantiation of written language processes and spoken language, working memory and other cognitive skills. PMID:26685156

  8. Memory for Context becomes Less Specific with Time

    ERIC Educational Resources Information Center

    Wiltgen, Brian J.; Silva, Alcino J.

    2007-01-01

    Context memories initially require the hippocampus, but over time become independent of this structure. This shift reflects a consolidation process whereby memories are gradually stored in distributed regions of the cortex. The function of this process is thought to be the extraction of statistical regularities and general knowledge from specific…

  9. A Memory-Based Theory of Verbal Cognition

    ERIC Educational Resources Information Center

    Dennis, Simon

    2005-01-01

    The syntagmatic paradigmatic model is a distributed, memory-based account of verbal processing. Built on a Bayesian interpretation of string edit theory, it characterizes the control of verbal cognition as the retrieval of sets of syntagmatic and paradigmatic constraints from sequential and relational long-term memory and the resolution of these…

  10. Autobiographical Memory from a Life Span Perspective

    ERIC Educational Resources Information Center

    Schroots, Johannes J. F.; van Dijkum, Cor; Assink, Marian H. J.

    2004-01-01

    This comparative study (i.e., three age groups, three measures) explores the distribution of retrospective and prospective autobiographical memory data across the lifespan, in particular the bump pattern of disproportionally higher recall of memories from the ages 10 to 30, as generally observed in older age groups, in conjunction with the…

  11. Computer Sciences and Data Systems, volume 2

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: data storage; information network architecture; VHSIC technology; fiber optics; laser applications; distributed processing; spaceborne optical disk controller; massively parallel processors; and advanced digital SAR processors.

  12. An interactive NASTRAN preprocessor. [graphic display of undeformed structure using CDC 6000 series computer

    NASA Technical Reports Server (NTRS)

    Smith, W. W.

    1973-01-01

    A Langley Research Center version of NASTRAN Level 15.1.0 designed to provide the analyst with an added tool for debugging massive NASTRAN input data is described. The program checks all NASTRAN input data cards and displays on a CRT the graphic representation of the undeformed structure. In addition, the program permits the display and alteration of input data and allows reexecution without physically resubmitting the job. Core requirements on the CDC 6000 computer are approximately 77,000 octal words of central memory.

  13. High Performance Programming Using Explicit Shared Memory Model on the Cray T3D

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The Cray T3D is the first-phase system in Cray Research Inc.'s (CRI) three-phase massively parallel processing program. In this report we describe the architecture of the T3D, as well as the CRAFT (Cray Research Adaptive Fortran) programming model, and contrast it with PVM, which is also supported on the T3D. We present some performance data based on the NAS Parallel Benchmarks to illustrate both architectural and software features of the T3D.

  14. Binary black hole mergers within the LIGO horizon: statistical properties and prospects for detecting electromagnetic counterparts

    NASA Astrophysics Data System (ADS)

    Perna, Rosalba; Chruslinska, Martyna; Corsi, Alessandra; Belczynski, Krzysztof

    2018-07-01

    Binary black holes (BBHs) are one of the endpoints of isolated binary evolution, and their mergers a leading channel for gravitational wave events. Here, using the evolutionary code STARTRACK, we study the statistical properties of the BBH population from isolated binary evolution for a range of progenitor star metallicities and BH natal kicks. We compute the mass function and the distribution of the primary BH spin a as a result of mass accretion during the binary evolution, and find that this is not an efficient process to spin-up BHs, producing an increase by at most a ˜ 0.2-0.3 for very low natal BH spins. We further compute the distribution of merger sites within the host galaxy, after tracking the motion of the binaries in the potentials of a massive spiral, a massive elliptical, and a dwarf galaxy. We find that a fraction of 70-90 per cent of mergers in massive galaxies and of 40-60 per cent in dwarfs (range mostly sensitive to the natal kicks) are expected to occur inside of their hosts. The number density distribution at the merger sites further allows us to estimate the broad-band luminosity distribution that BBH mergers would produce, if associated with a kinetic energy release in an outflow, which, as a reference, we assume at the level inferred for the Fermi GBM counterpart to GW150914, with the understanding that current limits from the O1 and O2 runs would require such emission to be produced within a jet of angular size within ≲50°.

  15. Binary Black Hole Mergers within the LIGO Horizon: Statistical Properties and prospects for detecting Electromagnetic Counterparts

    NASA Astrophysics Data System (ADS)

    Perna, Rosalba; Chruslinska, Martyna; Corsi, Alessandra; Belczynski, Krzysztof

    2018-03-01

    Binary black holes (BBHs) are one of the endpoints of isolated binary evolution, and their mergers a leading channel for gravitational wave events. Here, using the evolutionary code STARTRACK, we study the statistical properties of the BBH population from isolated binary evolution for a range of progenitor star metallicities and BH natal kicks. We compute the mass function and the distribution of the primary BH spin a as a result of mass accretion during the binary evolution, and find that this is not an efficient process to spin up BHs, producing an increase by at most a ˜ 0.2-0.3 for very low natal BH spins. We further compute the distribution of merger sites within the host galaxy, after tracking the motion of the binaries in the potentials of a massive spiral, a massive elliptical, and a dwarf galaxy. We find that a fraction of 70-90% of mergers in massive galaxies and of 40-60% in dwarfs (range mostly sensitive to the natal kicks) is expected to occur inside of their hosts. The number density distribution at the merger sites further allows us to estimate the broadband luminosity distribution that BBH mergers would produce, if associated with a kinetic energy release in an outflow, which, as a reference, we assume at the level inferred for the Fermi GBM counterpart to GW150914, with the understanding that current limits from the O1 and O2 runs would require such emission to be produced within a jet of angular size within ≲ 50°.

  16. Physical conditions, dynamics and mass distribution in the center of the galaxy

    NASA Technical Reports Server (NTRS)

    Genzel, R.; Townes, C. H.

    1987-01-01

    Investigations of the central 10 pc of the Galaxy, and conclusions on energetics, dynamics, and mass distribution derived from X and gamma ray measurements and from infrared and microwave studies, especially from spectroscopy, high resolution imaging, and interferometry are reviewed. Evidence for and against a massive black hole is analyzed.

  17. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  18. Flood inundation extent mapping based on block compressed tracing

    NASA Astrophysics Data System (ADS)

    Shen, Dingtao; Rui, Yikang; Wang, Jiechen; Zhang, Yu; Cheng, Liang

    2015-07-01

    Flood inundation extent, depth, and duration are important factors affecting flood hazard evaluation. At present, flood inundation analysis is based mainly on a seeded region-growing algorithm, which is an inefficient process because it requires excessive recursive computations and is incapable of processing massive datasets. To address this problem, we propose a block compressed tracing algorithm for mapping the flood inundation extent, which reads the DEM data in blocks before transferring them to raster compression storage. This allows a smaller computer memory to process a larger amount of data, overcoming the main limitation of the regular seeded region-growing algorithm. In addition, the use of a raster boundary tracing technique allows the algorithm to avoid the time-consuming computations required by seeded region growing. Finally, we conduct a comparative evaluation in the Chin-sha River basin; the results show that the proposed method solves the problem of flood inundation extent mapping on massive DEM datasets with higher computational efficiency than the original method, which makes it suitable for practical applications.
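
    For reference, the seeded region-growing baseline that the paper improves on can be sketched as a breadth-first flood fill over a DEM: starting from seed cells, every neighbor whose elevation lies below the water level is added to the inundated set. The sketch below is a generic illustration with an assumed 4-neighbor connectivity and a toy DEM, not the authors' block compressed tracing algorithm.

      from collections import deque
      import numpy as np

      def seeded_region_growing(dem, seeds, water_level):
          """Return a boolean mask of cells flooded from the seed cells.

          A cell is inundated if it is connected to a seed through cells whose
          elevation is below the water level (4-neighbor connectivity assumed).
          """
          rows, cols = dem.shape
          flooded = np.zeros_like(dem, dtype=bool)
          queue = deque(s for s in seeds if dem[s] < water_level)
          for s in queue:
              flooded[s] = True
          while queue:
              r, c = queue.popleft()
              for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                  if 0 <= nr < rows and 0 <= nc < cols and not flooded[nr, nc] \
                          and dem[nr, nc] < water_level:
                      flooded[nr, nc] = True
                      queue.append((nr, nc))
          return flooded

      # Toy usage: a sloped DEM flooded from one corner.
      dem = np.add.outer(np.arange(100), np.arange(100)).astype(float)
      mask = seeded_region_growing(dem, seeds=[(0, 0)], water_level=50.0)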

  19. Track finding in ATLAS using GPUs

    NASA Astrophysics Data System (ADS)

    Mattmann, J.; Schmitt, C.

    2012-12-01

    The reconstruction and simulation of collision events is a major task in modern HEP experiments involving several tens of thousands of standard CPUs. On the other hand, graphics processors (GPUs) have become much more powerful and by far outperform standard CPUs in terms of floating point operations due to their massively parallel approach. The usage of these GPUs could therefore significantly reduce the overall reconstruction time per event or allow for the usage of more sophisticated algorithms. In this paper the track finding in the ATLAS experiment will be used as an example of how GPUs can be used in this context: the implementation on the GPU requires a change in the algorithmic flow to allow the code to work in the rather limited environment on the GPU in terms of memory, cache, and transfer speed from and to the GPU, and to make use of the massive parallel computation. Both the specific implementation of parts of the ATLAS track reconstruction chain and the performance improvements obtained will be discussed.

  20. Changing concepts of working memory

    PubMed Central

    Ma, Wei Ji; Husain, Masud; Bays, Paul M

    2014-01-01

    Working memory is widely considered to be limited in capacity, holding a fixed, small number of items, such as Miller's ‘magical number’ seven or Cowan's four. It has recently been proposed that working memory might better be conceptualized as a limited resource that is distributed flexibly among all items to be maintained in memory. According to this view, the quality rather than the quantity of working memory representations determines performance. Here we consider behavioral and emerging neural evidence for this proposal. PMID:24569831

  1. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-01-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.

  2. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-09-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.

  3. Impurity mixing and radiation asymmetry in massive gas injection simulations of DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izzo, V. A.

    Simulations of neon massive gas injection into DIII-D are performed with the 3D MHD code NIMROD. The poloidal and toroidal distribution of the impurity source is varied. This report will focus on the effects of the source variation on impurity mixing and radiated power asymmetry. Even toroidally symmetric impurity injection is found to produce asymmetric radiated power due to asymmetric convective heat flux produced by the 1/1 mode. When the gas source is toroidally localized, the phase relationship between the mode and the source location is important, affecting both radiation peaking and impurity mixing. Under certain circumstances, a single, localized gas jet could produce better radiation symmetry during the disruption thermal quench than evenly distributed impurities.

  4. Memory as the "whole brain work": a large-scale model based on "oscillations in super-synergy".

    PubMed

    Başar, Erol

    2005-01-01

    According to recent trends, memory depends on several brain structures working in concert across many levels of neural organization; "memory is a constant work in progress." The proposition of a brain theory based on super-synergy in neural populations is most pertinent for the understanding of this constant work in progress. This report introduces a new model of memory based on the processes of EEG oscillations and brain dynamics. This model is shaped by the following conceptual and experimental steps: 1. The machineries of super-synergy in the whole brain are responsible for the formation of sensory-cognitive percepts. 2. The expression "dynamic memory" is used for memory processes that evoke relevant changes in alpha, gamma, theta and delta activities. The concerted action of distributed multiple oscillatory processes provides a major key for understanding distributed memory. It also encompasses phyletic memory and reflexes. 3. The evolving memory, which incorporates reciprocal actions or reverberations in the APLR alliance and during working memory processes, is especially emphasized. 4. A new model related to a "hierarchy of memories as a continuum" is introduced. 5. The notions of "longer activated memory" and "persistent memory" are proposed instead of long-term memory. 6. The new analysis to recognize faces emphasizes the importance of EEG oscillations in neurophysiology and Gestalt analysis. 7. The proposed basic framework called "Memory in the Whole Brain Work" emphasizes that memory and all brain functions are inseparable and act as a "whole" in the whole brain. 8. According to recent publications, the role of genetic factors is fundamental in living system settings and oscillations, and accordingly in memory. 9. A link from the "whole brain" to the "whole body," incorporating the vegetative and neurological systems, is proposed, with EEG oscillations and ultraslow oscillations serving as a control parameter.

  5. THE PROPERTIES OF DYNAMICALLY EJECTED RUNAWAY AND HYPER-RUNAWAY STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perets, Hagai B.; Subr, Ladislav

    2012-06-01

    Runaway stars are stars observed to have large peculiar velocities. Two mechanisms are thought to contribute to the ejection of runaway stars, both of which involve binarity (or higher multiplicity). In the binary supernova scenario, a runaway star receives its velocity when its binary massive companion explodes as a supernova (SN). In the alternative dynamical ejection scenario, runaway stars are formed through gravitational interactions between stars and binaries in dense, compact clusters or cluster cores. Here we study the ejection scenario. We make use of extensive N-body simulations of massive clusters, as well as analytic arguments, in order to characterize the expected ejection velocity distribution of runaway stars. We find that the ejection velocity distribution of the fastest runaways (v ≳ 80 km s^-1) depends on the binary distribution in the cluster, consistent with our analytic toy model, whereas the distribution of lower velocity runaways appears independent of the binaries' properties. For a realistic log constant distribution of binary separations, we find the velocity distribution to follow a simple power law: Γ(v) ∝ v^(-8/3) for the high-velocity runaways and v^(-3/2) for the low-velocity ones. We calculate the total expected ejection rates of runaway stars from our simulated massive clusters and explore their mass function and their binarity. The mass function of runaway stars is biased toward high masses and strongly depends on their velocity. The binarity of runaways is a decreasing function of their ejection velocity, with no binaries expected to be ejected with v > 150 km s^-1. We also find that hyper-runaways with velocities of hundreds of km s^-1 can be dynamically ejected from stellar clusters, but only at very low rates, which cannot account for a significant fraction of the observed population of hyper-velocity stars in the Galactic halo.

  6. The Galactic Distribution of OB Associations in Molecular Clouds

    NASA Astrophysics Data System (ADS)

    Williams, Jonathan P.; McKee, Christopher F.

    1997-02-01

    Molecular clouds account for half of the mass of the interstellar medium interior to the solar circle and for all current star formation. Using cloud catalogs of two CO surveys of the first quadrant, we have fitted the mass distribution of molecular clouds to a truncated power law in a similar manner as the luminosity function of OB associations in the companion paper to this work. After extrapolating from the first quadrant to the entire inner Galaxy, we find that the mass of cataloged clouds amounts to only 40% of current estimates of the total Galactic molecular mass. Following Solomon & Rivolo, we have assumed that the remaining molecular gas is in cold clouds, and we normalize the distribution accordingly. The predicted total number of clouds is then shown to be consistent with that observed in the solar neighborhood where cloud catalogs should be more complete. Within the solar circle, the cumulative form of the distribution is N_c(>M) = 105[(M_u/M)^0.6 - 1], where N_c is the number of clouds, and M_u = 6 × 10^6 M⊙ is the upper mass limit. The large number of clouds near the upper cutoff to the distribution indicates an underlying physical limit to cloud formation or destruction processes. The slope of the distribution corresponds to dN_c/dM ∝ M^-1.6, implying that although numerically most clouds are of low mass, most of the molecular gas is contained within the most massive clouds. The distribution of cloud masses is then compared to the Galactic distribution of OB association luminosities to obtain statistical estimates of the number of massive stars expected in any given cloud. The likelihood of massive star formation in a cloud is determined, and it is found that the median cloud mass that contains at least one O star is ~10^5 M⊙. The average star formation efficiency over the lifetime of an association is about 5% but varies by more than 2 orders of magnitude from cloud to cloud and is predicted to increase with cloud mass. O stars photoevaporate their surrounding molecular gas, and even with low rates of formation, they are the principal agents of cloud destruction. Using an improved estimate of the timescale for photoevaporation and our statistics on the expected numbers of stars per cloud, we find that 10^6 M⊙ giant molecular clouds (GMCs) are expected to survive for about 3 × 10^7 yr. Smaller clouds are disrupted, rather than photoionized, by photoevaporation. The porosity of H II regions in large GMCs is shown to be of order unity, which is consistent with self-regulation of massive star formation in GMCs.
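
    As a quick, illustrative evaluation of the quoted cumulative distribution (taking the coefficient 105 and M_u = 6 × 10^6 M⊙ at face value), the number of inner-Galaxy clouds more massive than 10^5 M⊙ works out to roughly a thousand. This arithmetic is offered only as a reading aid and is not a figure quoted by the authors:

      \mathcal{N}_c(>10^5\,M_\odot) \;=\; 105\left[\left(\frac{6\times 10^{6}}{10^{5}}\right)^{0.6} - 1\right] \;\approx\; 105\,(60^{0.6} - 1) \;\approx\; 1.1\times 10^{3}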

  7. Probing Massive Black Hole Populations and Their Environments with LISA

    NASA Astrophysics Data System (ADS)

    Katz, Michael; Larson, Shane

    2018-01-01

    With the adoption of the LISA Mission Proposal by the European Space Agency in response to its call for L3 mission concepts, gravitational wave measurements from space are on the horizon. With data from the Illustris large-scale cosmological simulation, we provide analysis of LISA detection rates accompanied by characterization of the merging Massive Black Holes (MBH) and their host galaxies. MBHs of total mass ~10^6-10^9 M⊙ are the main focus of this study. Using a precise treatment of the dynamical friction evolutionary process prior to gravitational wave emission, we evolve MBH simulation particle mergers from ~kpc scales until coalescence to achieve a merger distribution. Using the statistical basis of the Illustris output, we Monte Carlo synthesize many realizations of the merging massive black hole population across space and time. We use those realizations to build mock LISA detection catalogs to understand the impact of LISA mission configurations on our ability to probe massive black hole merger populations and their environments throughout the visible Universe.

  8. Colony size as a species character in massive reef corals

    NASA Astrophysics Data System (ADS)

    Soong, Keryea

    1993-07-01

    In a study of seven massive, Caribbean corals, I have found major differences in reproductive behavior between species with large maximum colony sizes and species with smaller maximum colony sizes. Four species (Diploria clivosa, D. strigosa, Montastrea cavernosa, Siderastrea siderea) which are large (>1000 cm2 in surface area) broadcast gametes during a short spawning season. Their puberty size is relatively large (>100 cm2, except M. cavernosa). In contrast, two small massive species (<100 cm2, Favia fragum and S. radians), and one medium-sized (100-1000 cm2, Porites astreoides) massive species, brood larvae during an extended season (year-round in Panama). The puberty size of the small species is only 2-4 cm2. Given these close associations between maximum colony sizes and a number of fundamental reproductive attributes, greater attention should be given to the colony size distributions of different species of reef corals in nature, since many important life history and population characters may be inferred.

  9. Short-Term Memory in Orthogonal Neural Networks

    NASA Astrophysics Data System (ADS)

    White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim

    2004-04-01

    We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
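
    A minimal way to see the "distributed shift register" case is a linear network whose connectivity simply passes each unit's value along to the next unit, so the instantaneous state holds the last N inputs. The sketch below is an illustrative toy under that assumption, not the authors' analysis of random orthogonal matrices.

      import numpy as np

      N = 8                                    # network size (assumed)
      W = np.diag(np.ones(N - 1), k=-1)        # shift-register connectivity: unit i -> unit i+1
      x = np.zeros(N)                          # instantaneous network state
      inputs = np.random.default_rng(0).standard_normal(20)

      history = []
      for s in inputs:
          x = W @ x                            # linear recurrent dynamics
          x[0] += s                            # feed the new input into the first unit
          history.append(x.copy())

      # After each step, x[k] equals the input from k steps ago (for k < N),
      # so the temporal memory capacity of this connectivity scales with the system size N.
      assert np.isclose(history[-1][3], inputs[-4])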

  10. The Basolateral Amygdala and Nucleus Accumbens Core Mediate Dissociable Aspects of Drug Memory Reconsolidation

    ERIC Educational Resources Information Center

    Theberge, Florence R. M.; Milton, Amy L.; Belin, David; Lee, Jonathan L. C.; Everitt, Barry J.

    2010-01-01

    A distributed limbic-corticostriatal circuitry is implicated in cue-induced drug craving and relapse. Exposure to drug-paired cues not only precipitates relapse, but also triggers the reactivation and reconsolidation of the cue-drug memory. However, the limbic cortical-striatal circuitry underlying drug memory reconsolidation is unclear. The aim…

  11. NAS Applications and Advanced Algorithms

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Biswas, Rupak; VanDerWijngaart, Rob; Kutler, Paul (Technical Monitor)

    1997-01-01

    This paper examines the applications most commonly run on the supercomputers at the Numerical Aerospace Simulation (NAS) facility. It analyzes the extent to which such applications are fundamentally oriented to vector computers, and whether or not they can be efficiently implemented on hierarchical memory machines, such as systems with cache memories and highly parallel, distributed memory systems.

  12. The effects of dolomitization on petrophysical properties and fracture distribution within rift-related carbonates (Hammam Faraun Fault Block, Suez Rift, Egypt)

    NASA Astrophysics Data System (ADS)

    Korneva, I.; Bastesen, E.; Corlett, H.; Eker, A.; Hirani, J.; Hollis, C.; Gawthorpe, R. L.; Rotevatn, A.; Taylor, R.

    2018-03-01

    Petrographic and petrophysical data from different limestone lithofacies (skeletal packstones, matrix-supported conglomerates and foraminiferal grainstones) and their dolomitized equivalents within a slope carbonate succession (Eocene Thebes Formation) of the Hammam Faraun Fault Block (Suez Rift, Egypt) have been analyzed in order to link fracture distribution with the mechanical and textural properties of these rocks. Two phases of dolomitization resulted in facies-selective stratabound dolostones extending up to two and a half kilometers from the Hammam Faraun Fault, and massive dolostones in the vicinity of the fault (within 100 metres). Stratabound dolostones are characterized by up to 8 times lower porosity and 6 times higher fracture frequency compared to the host limestones. Precursor lithofacies type has no significant effect on fracture frequency in the stratabound dolostones. At a distance of 100 metres from the fault, massive dolostones are present which have 0.5 times the porosity of the precursor limestones, and lithofacies type exerts a stronger control on fracture frequency than dolomitization itself (undolomitized vs. dolomitized). Massive dolomitization corresponds to increased fracture intensity in conglomerates and grainstones but decreased fracture intensity in packstones. This corresponds to a decrease of grain/crystal size in conglomerates and grainstones and an increase in packstones after massive dolomitization. Since fractures may contribute significantly to the flow properties of a carbonate rock, the work presented herein has significant applicability to hydrocarbon exploration and production from limestone and dolostone reservoirs, particularly where matrix porosities are low.

  13. Characterizing the Protostars in the Herschel Survey of Cygnus-X

    NASA Astrophysics Data System (ADS)

    Kirk, James; Hora, J. L.; Smith, H. A.; Herschel Cygnus-X Group

    2014-01-01

    The Cygnus-X complex is an extremely active region of massive star formation at a distance of ~1.4 kpc which can be studied with higher sensitivity and less confusion than more distant regions. The study of this region is important in improving our understanding of the formation processes and protostellar phases of massive stars. A previous Spitzer Legacy survey of Cygnus-X mapped the distributions of Class I and Class II YSOs within the region and studied the interaction between massive young stars and clusters of YSOs. Using data from the recent Herschel survey of the region, taken with the PACS and SPIRE instruments (70-500 microns), we are expanding this study of star formation to the youngest and most deeply embedded objects. Using these data we will expand the sample of massive protostars and YSOs in Cygnus-X, analyze the population of infrared dark clouds and their embedded objects, construct Spectral Energy Distributions (SEDs) using pre-existing Spitzer and near-IR data sets (1-500 microns), and fit these sources with models of protostars to derive luminosities and envelope masses. The derived luminosities and masses will enable us to create evolutionary diagrams and test models of high-mass star formation. We will also investigate what role OB associations, such as Cyg OB2, play in causing subsequent star formation in neighboring clouds, providing us with a comprehensive picture of star formation within this extremely active complex.

  14. A RAPIDLY EVOLVING REGION IN THE GALACTIC CENTER: WHY S-STARS THERMALIZE AND MORE MASSIVE STARS ARE MISSING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xian; Amaro-Seoane, Pau, E-mail: Xian.Chen@aei.mpg.de, E-mail: Pau.Amaro-Seoane@aei.mpg.de

    2014-05-10

    The existence of "S-stars" within a distance of 1'' from Sgr A* contradicts our understanding of star formation, due to Sgr A*'s forbiddingly violent environment. A suggested possibility is that they formed far away and were brought in by some fast dynamical process, since they are young. Nonetheless, all conjectured mechanisms either fail to reproduce their eccentricities (without violating their young age) or cannot explain the problem of "inverse mass segregation": the fact that lighter stars (the S-stars) are closer to Sgr A* and more massive ones, Wolf-Rayet (WR) and O-stars, are farther out. In this Letter we propose that the mechanism responsible for both the distribution of the eccentricities and the paucity of massive stars is the Kozai-Lidov-like resonance induced by a sub-parsec disk recently discovered in the Galactic center. Considering that the disk probably extended to a smaller radius in the past, we show that in as little as a few × 10^6 yr, the stars populating the innermost 1'' region would redistribute in angular-momentum space and recover the observed "super-thermal" distribution. Meanwhile, WR and O-stars in the same region intermittently attain ample eccentricities that will lead to their tidal disruption by the central massive black hole. Our results provide new evidence that Sgr A* was powered several million years ago by an accretion disk as well as by tidal stellar disruptions.

  15. Spatial Inference for Distributed Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Katzfuss, M.; Nguyen, H.

    2014-12-01

    Remote sensing data are inherently spatial, and a substantial portion of their value for scientific analyses derives from the information they can provide about spatially dependent processes. Geophysical variables such as atmospheric temperature, cloud properties, humidity, aerosols and carbon dioxide all exhibit spatial patterns, and satellite observations can help us learn about the physical mechanisms driving them. However, remote sensing observations are often noisy and incomplete, so inferring properties of true geophysical fields from them requires some care. These data can also be massive, which is both a blessing and a curse: using more data drives uncertainties down, but also drives costs up, particularly when data are stored on different computers or in different physical locations. In this talk I will discuss a methodology for spatial inference on massive, distributed data sets that does not require moving large volumes of data. The approach is based on a combination of ideas, including modeling spatial covariance structures with low-rank covariance matrices and distributed estimation in sensor or wireless networks.
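
    One common way to make spatial inference tractable without moving the raw data, in the spirit of the low-rank covariance representation mentioned here, is to expand the field in a small set of basis functions and have each data holder ship only low-dimensional summary matrices. The sketch below illustrates that pattern with a simple least-squares combination of per-site summaries; the Gaussian basis, the noise model, and the two-site split are assumptions for illustration, not the speaker's method.

      import numpy as np

      def basis(coords, centers, scale=0.5):
          """Gaussian radial basis functions evaluated at the given coordinates."""
          d2 = ((coords[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
          return np.exp(-d2 / (2.0 * scale ** 2))

      def local_summaries(coords, obs, centers):
          """Each site computes only small (r x r) and (r,) summaries of its data."""
          B = basis(coords, centers)
          return B.T @ B, B.T @ obs

      rng = np.random.default_rng(0)
      centers = rng.uniform(0, 1, (10, 2))                  # r = 10 basis centers

      # Two 'remote' datasets that never leave their sites.
      sites = [rng.uniform(0, 1, (500, 2)) for _ in range(2)]
      truth = lambda c: np.sin(4 * c[:, 0]) + np.cos(3 * c[:, 1])
      obs = [truth(c) + 0.1 * rng.standard_normal(len(c)) for c in sites]

      # Only the small summaries are communicated and summed centrally.
      BtB = np.zeros((10, 10)); Bty = np.zeros(10)
      for c, y in zip(sites, obs):
          A, b = local_summaries(c, y, centers)
          BtB += A; Bty += b
      coeff = np.linalg.solve(BtB + 1e-6 * np.eye(10), Bty)  # ridge-stabilized solve

      # Predict the field anywhere from the fitted basis coefficients.
      grid = rng.uniform(0, 1, (5, 2))
      prediction = basis(grid, centers) @ coeff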

  16. Wide-Field Infrared Survey Explorer Observations of the Evolution of Massive Star-Forming Regions

    NASA Technical Reports Server (NTRS)

    Koenig, X. P.; Leisawitz, D. T.; Benford, D. J.; Rebull, L. M.; Padgett, D. L.; Assef, R. J.

    2011-01-01

    We present the results of a mid-infrared survey of 11 outer Galaxy massive star-forming regions and 3 open clusters with data from the Wide-field Infrared Survey Explorer (WISE). Using a newly developed photometric scheme to identify young stellar objects and exclude extragalactic contamination, we have studied the distribution of young stars within each region. These data tend to support the hypothesis that later generations may be triggered by the interaction of winds and radiation from the first burst of massive star formation with the molecular cloud material leftover from that earlier generation of stars. We dub this process the "fireworks hypothesis" since star formation by this mechanism would proceed rapidly and resemble a burst of fireworks. We have also analyzed small cutout WISE images of the structures around the edges of these massive star-forming regions. We observe large (1-3 pc size) pillar and trunk-like structures of diffuse emission nebulosity tracing excited polycyclic aromatic hydrocarbon molecules and small dust grains at the perimeter of the massive star-forming regions. These structures contain small clusters of emerging Class I and Class II sources, but some are forming only a single to a few new stars.

  17. Wide-Field Infrared Survey Explorer Observations of the Evolution of Massive Star-Forming Regions

    NASA Technical Reports Server (NTRS)

    Koenig, X. P.; Leisawitz, D. T.; Benford, D. J.; Rebull, L. M.; Padgett, D. L.; Assef, R. J.

    2012-01-01

    We present the results of a mid-infrared survey of 11 outer Galaxy massive star-forming regions and 3 open clusters with data from the Wide-field Infrared Survey Explorer (WISE). Using a newly developed photometric scheme to identify young stellar objects and exclude extragalactic contamination, we have studied the distribution of young stars within each region. These data tend to support the hypothesis that later generations may be triggered by the interaction of winds and radiation from the first burst of massive star formation with the molecular cloud material leftover from that earlier generation of stars. We dub this process the "fireworks hypothesis" since star formation by this mechanism would proceed rapidly and resemble a burst of fireworks. We have also analyzed small cutout WISE images of the structures around the edges of these massive star-forming regions. We observe large (1-3 pc size) pillar and trunk-like structures of diffuse emission nebulosity tracing excited polycyclic aromatic hydrocarbon molecules and small dust grains at the perimeter of the massive star-forming regions. These structures contain small clusters of emerging Class I and Class II sources, but some are forming only a single to a few new stars.

  18. Channel Acquisition for Massive MIMO-OFDM With Adjustable Phase Shift Pilots

    NASA Astrophysics Data System (ADS)

    You, Li; Gao, Xiqi; Swindlehurst, A. Lee; Zhong, Wen

    2016-03-01

    We propose adjustable phase shift pilots (APSPs) for channel acquisition in wideband massive multiple-input multiple-output (MIMO) systems employing orthogonal frequency division multiplexing (OFDM) to reduce the pilot overhead. Based on a physically motivated channel model, we first establish a relationship between channel space-frequency correlations and the channel power angle-delay spectrum in the massive antenna array regime, which reveals the channel sparsity in massive MIMO-OFDM. With this channel model, we then investigate channel acquisition, including channel estimation and channel prediction, for massive MIMO-OFDM with APSPs. We show that channel acquisition performance in terms of sum mean square error can be minimized if the user terminals' channel power distributions in the angle-delay domain can be made non-overlapping with proper phase shift scheduling. A simplified pilot phase shift scheduling algorithm is developed based on this optimal channel acquisition condition. The performance of APSPs is investigated for both one symbol and multiple symbol data models. Simulations demonstrate that the proposed APSP approach can provide substantial performance gains in terms of achievable spectral efficiency over the conventional phase shift orthogonal pilot approach in typical mobility scenarios.

  19. Programming in Vienna Fortran

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Zima, Hans

    1992-01-01

    Exploiting the full performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna Fortran is a language extension of Fortran which provides the user with a wide range of facilities for such mapping of data structures. In contrast to current programming practice, programs in Vienna Fortran are written using global data references. Thus, the user has the advantages of a shared memory programming paradigm while explicitly controlling the data distribution. In this paper, we present the language features of Vienna Fortran for FORTRAN 77, together with examples illustrating the use of these features.

  20. Order statistics applied to the most massive and most distant galaxy clusters

    NASA Astrophysics Data System (ADS)

    Waizmann, J.-C.; Ettori, S.; Bartelmann, M.

    2013-06-01

    In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to the extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) is sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift is particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that for the combination of the largest and the second largest observation, it is most likely to find them realized with similar values with a broadly peaked distribution. When combining the largest observation with higher orders, it is more likely to find a larger gap between the observations, and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the meta-catalogue of X-ray detected clusters of galaxies (MCXC) and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function. For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
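
    For orientation, the basic relation behind such an analysis is the distribution of the nth largest of N independent draws: if each observable (cluster mass or redshift) has cumulative distribution function F(m), then the nth most massive object lies below m exactly when at most n-1 of the N draws exceed m,

      P\bigl(M_{(n)} \le m\bigr) \;=\; \sum_{k=0}^{n-1} \binom{N}{k}\,\bigl[1 - F(m)\bigr]^{k}\,F(m)^{\,N-k}.

    For n = 1 this reduces to F(m)^N, the extreme value statistics case the abstract compares against, and the steepening with increasing n is what gives higher orders their extra constraining power. This is the generic textbook relation, quoted here only as a reading aid; the paper's actual framework additionally folds in the survey area, selection function, and halo mass function.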

  1. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm.

    PubMed

    Yang, Mengzhao; Song, Wei; Mei, Haibin

    2017-07-23

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusing it with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method can achieve better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup remain very close to linear as the number of RS images increases, which proves that image retrieval using our method is efficient.
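
    The core of the retrieval step, the mean-shift update, iteratively moves a point toward the average of its neighbors within a kernel bandwidth. The sketch below is a plain, single-machine version of that update with a flat kernel; the bandwidth, convergence tolerance, and the coupling to the canopy pre-clustering and Hadoop MapReduce described in the abstract are not reproduced here.

      import numpy as np

      def mean_shift(points, bandwidth=1.0, tol=1e-4, max_iter=100):
          """Shift every point to the mean of its neighbors until convergence (flat kernel)."""
          shifted = points.copy()
          for _ in range(max_iter):
              moved = 0.0
              for i, p in enumerate(shifted):
                  dist = np.linalg.norm(points - p, axis=1)
                  neighbors = points[dist < bandwidth]
                  new_p = neighbors.mean(axis=0)
                  moved = max(moved, np.linalg.norm(new_p - p))
                  shifted[i] = new_p
              if moved < tol:
                  break
          return shifted  # points belonging to the same mode end up nearly coincident

      # Toy usage: two well-separated feature clusters collapse onto two modes.
      rng = np.random.default_rng(0)
      features = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
      modes = mean_shift(features, bandwidth=1.0)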

  2. Luminous and Variable Stars in M31 and M33. V. The Upper HR Diagram

    NASA Astrophysics Data System (ADS)

    Humphreys, Roberta M.; Davidson, Kris; Hahn, David; Martin, John C.; Weis, Kerstin

    2017-07-01

    We present HR diagrams for the massive star populations in M31 and M33, including several different types of emission-line stars: the confirmed luminous blue variables (LBVs), candidate LBVs, B[e] supergiants, and the warm hypergiants. We estimate their apparent temperatures and luminosities for comparison with their respective massive star populations and evaluate the possible relationships of these different classes of evolved, massive stars, and their evolutionary state. Several of the LBV candidates lie near the LBV/S Dor instability strip that supports their classification. Most of the B[e] supergiants, however, are less luminous than the LBVs. Many are very dusty with the infrared flux contributing one-third or more to their total flux. They are also relatively isolated from other luminous OB stars. Overall, their spatial distribution suggests a more evolved state. Some may be post-RSGs (red supergiants) like the warm hypergiants, and there may be more than one path to becoming a B[e] star. There are sufficient differences in the spectra, luminosities, spatial distribution, and the presence or lack of dust between the LBVs and B[e] supergiants to conclude that one group does not evolve into the other.

  3. Probabilistic HR Diagrams: A New Infrared and X-ray Chronometer for Very Young, Massive Stellar Clusters and Associations

    NASA Astrophysics Data System (ADS)

    Maldonado, Jessica; Povich, Matthew S.

    2016-01-01

    We present a novel method for constraining the duration of star formation in very young, massive star-forming regions. Constraints on stellar population ages are derived from probabilistic HR diagrams (pHRDs) generated by fitting stellar model spectra to the infrared (IR) spectral energy distributions (SEDs) of Herbig Ae/Be stars and their less-evolved, pre-main sequence progenitors. Stellar samples for the pHRDs are selected based on the detection of X-ray emission associated with the IR source, and the lack of detectable IR excess emission at wavelengths ≤4.5 µm. The SED model fits were used to create two-dimensional probability distributions of the stellar parameters, specifically bolometric luminosity versus temperature and mass versus evolutionary age. We present first results from the pHRD analysis of the relatively evolved Carina Nebula and the unevolved M17 SWex infrared dark cloud, which reveal the expected, strikingly different star formation durations between these two regions. In the future, we will apply this method to analyze available X-ray and IR data from the MYStIX project on other Galactic massive star forming regions within 3 kpc of the Sun.
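
    The bookkeeping behind a pHRD can be sketched very simply, under the assumption that each star comes with a list of candidate model fits and associated probabilities; the inputs below are hypothetical stand-ins for the actual SED-fitting results.

      import numpy as np

      def probabilistic_hrd(fits, t_edges, l_edges):
          """Accumulate a probability-weighted 2D histogram (temperature vs.
          luminosity) over all stars. `fits` is a list of (T, L, weight) arrays,
          one per star, with weights proportional to the SED-fit probabilities."""
          phrd = np.zeros((len(t_edges) - 1, len(l_edges) - 1))
          for T, L, w in fits:
              w = w / w.sum()                   # each star contributes unit probability
              h, _, _ = np.histogram2d(T, L, bins=[t_edges, l_edges], weights=w)
              phrd += h
          return phrd

      # Toy usage with two fake stars, each with three candidate model fits.
      t_edges = np.linspace(3000, 30000, 28)
      l_edges = np.logspace(-1, 5, 31)
      fits = [(np.array([9000., 9500., 12000.]), np.array([80., 95., 200.]),
               np.array([0.6, 0.3, 0.1])),
              (np.array([4000., 4200., 5000.]), np.array([2., 2.5, 4.]),
               np.array([0.5, 0.4, 0.1]))]
      print(probabilistic_hrd(fits, t_edges, l_edges).sum())   # ≈ 2.0, one per star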

  4. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm

    PubMed Central

    Song, Wei; Mei, Haibin

    2017-01-01

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved in not only storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusing it with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better data-storage performance than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear with an increasing number of RS images, which shows that image retrieval using our method is efficient. PMID:28737699

  5. CD8+ T Lymphocyte Expansion, Proliferation and Activation in Dengue Fever

    PubMed Central

    de Matos, Andréia Manso; Carvalho, Karina Inacio; Rosa, Daniela Santoro; Villas-Boas, Lucy Santos; da Silva, Wanessa Cardoso; Rodrigues, Célia Luiza de Lima; Oliveira, Olímpia Massae Nakasone Peel Furtado; Levi, José Eduardo; Araújo, Evaldo Stanislau Affonso; Pannuti, Claudio Sergio; Luna, Expedito José Albuquerque; Kallas, Esper George

    2015-01-01

    Dengue fever induces a robust immune response, including massive T cell activation. The level of T cell activation may, however, be associated with more severe disease. In this study, we explored the level of CD8+ T lymphocyte activation in the first six days after onset of symptoms during a DENV2 outbreak in early 2010 on the coast of São Paulo State, Brazil. Using flow cytometry we detected a progressive increase in the percentage of CD8+ T cells in 74 dengue fever cases. Peripheral blood mononuclear cells from 30 cases were thawed and evaluated using expanded phenotyping. The expansion of the CD8+ T cells was coupled with increased Ki67 expression. Cell activation was observed later in the course of disease, as determined by the expression of the activation markers CD38 and HLA-DR. This increased CD8+ T lymphocyte activation was observed in all memory subsets, but was more pronounced in the effector memory subset, as defined by higher CD38 expression. Our results show that most CD8+ T cell subsets are expanded during DENV2 infection and that the effector memory subset is the predominantly affected subpopulation. PMID:25675375

  6. Review on the APP/PS1KI mouse model: intraneuronal Abeta accumulation triggers axonopathy, neuron loss and working memory impairment.

    PubMed

    Bayer, T A; Wirths, O

    2008-02-01

    Accumulating evidence points to an important role of intraneuronal Abeta as a trigger of the pathological cascade of events leading to neurodegeneration and eventually to Alzheimer's disease (AD) with its typical clinical symptoms, such as memory impairment and personality change. As a new concept, intraneuronal accumulation of Abeta, rather than extracellular Abeta deposition, has been proposed as the disease-triggering event in AD. The present review compiles current knowledge on the amyloid precursor protein (APP)/PS1KI mouse model with early and massive intraneuronal Abeta42 accumulation: (1) The APP/PS1KI mouse model exhibits early robust brain and spinal cord axonal degeneration and hippocampal CA1 neuron loss. (2) At the same time-point, a dramatic, age-dependent reduction in the ability to perform working memory and motor tasks is observed. (3) The APP/PS1KI mice are smaller and show development of a thoracolumbar kyphosis, together with an incremental loss of body weight. (4) Onset of the observed behavioral alterations correlates well with robust axonal degeneration in brain and spinal cord and with abundant hippocampal CA1 neuron loss.

  7. NANOGrav Constraints on Gravitational Wave Bursts with Memory

    NASA Astrophysics Data System (ADS)

    Arzoumanian, Z.; Brazier, A.; Burke-Spolaor, S.; Chamberlin, S. J.; Chatterjee, S.; Christy, B.; Cordes, J. M.; Cornish, N. J.; Demorest, P. B.; Deng, X.; Dolch, T.; Ellis, J. A.; Ferdman, R. D.; Fonseca, E.; Garver-Daniels, N.; Jenet, F.; Jones, G.; Kaspi, V. M.; Koop, M.; Lam, M. T.; Lazio, T. J. W.; Levin, L.; Lommen, A. N.; Lorimer, D. R.; Luo, J.; Lynch, R. S.; Madison, D. R.; McLaughlin, M. A.; McWilliams, S. T.; Nice, D. J.; Palliyaguru, N.; Pennucci, T. T.; Ransom, S. M.; Siemens, X.; Stairs, I. H.; Stinebring, D. R.; Stovall, K.; Swiggum, J.; Vallisneri, M.; van Haasteren, R.; Wang, Y.; Zhu, W. W.; NANOGrav Collaboration

    2015-09-01

    Among efforts to detect gravitational radiation, pulsar timing arrays are uniquely poised to detect “memory” signatures, permanent perturbations in spacetime from highly energetic astrophysical events such as mergers of supermassive black hole binaries. The North American Nanohertz Observatory for Gravitational Waves (NANOGrav) observes dozens of the most stable millisecond pulsars using the Arecibo and Green Bank radio telescopes in an effort to study, among other things, gravitational wave memory. We herein present the results of a search for gravitational wave bursts with memory (BWMs) using the first five years of NANOGrav observations. We develop original methods for dramatically speeding up searches for BWM signals. In the directions of the sky where our sensitivity to BWMs is best, we would detect mergers of binaries with reduced masses of 10^9 M⊙ out to distances of 30 Mpc; such massive mergers in the Virgo cluster would be marginally detectable. We find no evidence for BWMs. However, with our non-detection, we set upper limits on the rate at which BWMs of various amplitudes could have occurred during the time spanned by our data—e.g., BWMs with amplitudes greater than 10^-13 must encounter the Earth at a rate less than 1.5 yr^-1.

  8. CENTRAL ENGINE MEMORY OF GAMMA-RAY BURSTS AND SOFT GAMMA-RAY REPEATERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Bin-Bin; Castro-Tirado, Alberto J.; Zhang, Bing, E-mail: zhang.grb@gmail.com

    Gamma-ray bursts (GRBs) are bursts of γ-rays generated from relativistic jets launched from catastrophic events such as massive star core collapse or binary compact star coalescence. Previous studies suggested that GRB emission is erratic, with no noticeable memory in the central engine. Here we report a discovery that similar light curve patterns exist within individual bursts for at least some GRBs. Applying the Dynamic Time Warping method, we show that similarity of light curve patterns between pulses of a single burst or between the light curves of a GRB and its X-ray flare can be identified. This suggests that the central engine of at least some GRBs carries “memory” of its activities. We also show that the same technique can identify memory-like emission episodes in the flaring emission in soft gamma-ray repeaters (SGRs), which are believed to be Galactic, highly magnetized neutron stars named magnetars. Such a phenomenon challenges the standard black hole central engine models for GRBs, and suggests a common physical mechanism behind GRBs and SGRs, which points toward a magnetar central engine of GRBs.

  9. Detection of weak signals in memory thermal baths.

    PubMed

    Jiménez-Aquino, J I; Velasco, R M; Romero-Bastida, M

    2014-11-01

    The nonlinear relaxation time and the statistics of the first passage time distribution in connection with the quasideterministic approach are used to detect weak signals in the decay process of the unstable state of a Brownian particle embedded in memory thermal baths. The study is performed in the overdamped approximation of a generalized Langevin equation characterized by an exponential decay in the friction memory kernel. A detection criterion for each time scale is studied: The first one is referred to as the receiver output, which is given as a function of the nonlinear relaxation time, and the second one is related to the statistics of the first passage time distribution.
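
    For reference, the overdamped, exponentially correlated setting described above corresponds to a generalized Langevin equation of the standard textbook form written out below; the notation is generic, and the paper's specific parameter choices may differ.

      % Generalized Langevin equation with an exponential friction memory kernel
      % and the corresponding fluctuation-dissipation relation (standard form).
      m\,\dot{v}(t) = -\int_{0}^{t} \gamma(t-t')\, v(t')\, dt' + F(x) + \xi(t),
      \qquad
      \gamma(t) = \frac{\gamma_{0}}{\tau}\, e^{-t/\tau},
      \qquad
      \langle \xi(t)\, \xi(t') \rangle = k_{B} T\, \gamma(|t-t'|).

    In the overdamped approximation the inertial term on the left-hand side is dropped, which is the limit studied in the abstract.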

  10. Roles of Course Facilitators, Learners, and Technology in the Flow of Information of a cMOOC

    ERIC Educational Resources Information Center

    Skrypnyk, Oleksandra; Joksimovic, Srec´ko; Kovanovic, Vitomir; Gas?evic, Dragan; Dawson, Shane

    2015-01-01

    Distributed Massive Open Online Courses (MOOCs) are based on the premise that online learning occurs through a network of interconnected learners. The teachers' role in distributed courses extends to forming such a network by facilitating communication that connects learners and their separate personal learning environments scattered around the…

  11. LEAVING THE DARK AGES WITH AMIGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manrique, Alberto; Salvador-Solé, Eduard; Juan, Enric

    2015-01-01

    We present an Analytic Model of Intergalactic-medium and GAlaxy (AMIGA) evolution since the dark ages. AMIGA is in the spirit of the popular semi-analytic models of galaxy formation, although it does not use halo merger trees but interpolates halo properties in grids that are progressively built. This strategy is less memory-demanding and allows one to start modeling at sufficiently high redshifts and low halo masses to have trivial boundary conditions. The number of free parameters is minimized by making a causal connection between physical processes usually treated as independent of each other, which leads to more reliable predictions. However, the strongest points of AMIGA are the following: (1) the inclusion of molecular cooling and metal-poor, population III (Pop III) stars with the most dramatic feedback and (2) accurate follow up of the temperature and volume filling factor of neutral, singly ionized, and doubly ionized regions, taking into account the distinct halo mass functions in those environments. We find the following general results. Massive Pop III stars determine the intergalactic medium metallicity and temperature, and the growth of spheroids and disks is self-regulated by that of massive black holes (MBHs) developed from the remnants of those stars. However, the properties of normal galaxies and active galactic nuclei appear to be quite insensitive to Pop III star properties due to the much higher yield of ordinary stars compared to Pop III stars and the dramatic growth of MBHs when normal galaxies begin to develop, which cause the memory loss of the initial conditions.

  12. Understanding human dynamics in microblog posting activities

    NASA Astrophysics Data System (ADS)

    Jiang, Zhihong; Zhang, Yubao; Wang, Hui; Li, Pei

    2013-02-01

    Human activity patterns are an important issue in behavior dynamics research. Empirical evidence indicates that human activity patterns can be characterized by a heavy-tailed inter-event time distribution. However, most researchers give an understanding by only modeling the power-law feature of the inter-event time distribution, and those overlooked non-power-law features are likely to be nontrivial. In this work, we propose a behavior dynamics model, called the finite memory model, in which humans adaptively change their activity rates based on a finite memory of recent activities, which is driven by inherent individual interest. Theoretical analysis shows a finite memory model can properly explain various heavy-tailed inter-event time distributions, including a regular power law and some non-power-law deviations. To validate the model, we carry out an empirical study based on microblogging activity from thousands of microbloggers in the Celebrity Hall of the Sina microblog. The results show further that the model is reasonably effective. We conclude that finite memory is an effective dynamics element to describe the heavy-tailed human activity pattern.

  13. Quantitative autoradiographic analysis of muscarinic receptor subtypes and their role in representational memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, W.S.

    1986-01-01

    Autoradiographic techniques were used to examine the distribution of muscarinic receptors in rat brain slices. Agonist and selective antagonist binding were examined by measuring the ability of unlabeled ligands to inhibit (³H)-1-QNB labeling of muscarinic receptors. The distribution of high affinity pirenzepine binding sites (M₁ subtype) was distinct from the distribution of high affinity carbamylcholine sites, which corresponded to the M₂ subtype. In a separate assay, the binding profile for pirenzepine was shown to differ from the profile for scopolamine, a classical muscarinic antagonist. Muscarinic antagonists, when injected into the hippocampus, impaired performance of a representational memory task. Pirenzepine, the M₁-selective antagonist, produced representational memory deficits. Scopolamine, a less selective muscarinic antagonist, caused increases in running times in some animals which prevented a definitive interpretation of the nature of the impairment. Pirenzepine displayed a higher affinity for the hippocampus and was more effective in producing a selective impairment of representational memory than scopolamine. The data indicated that cholinergic activity in the hippocampus was necessary for representational memory function.

  14. A class of designs for a sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1989-01-01

    A general class of designs for a sparse distributed memory (SDM) is described. The author shows that Kanerva's original design and the selected-coordinate design are related, and that there is a series of possible intermediate designs between those two designs. In each such design, the set of addresses that activate a memory location is a sphere in the address space. We can also have hybrid designs, in which the memory locations may be a mixture of those found in the other designs. In some applications, the bits of the read and write addresses that will actually be used might be mostly zeros; that is, the addresses might lie on or near a hyperplane in the address space. The author describes a hyperplane design which is adapted to this situation and compares it to an adaptation of Kanerva's design. To study the performance of these designs, he computes the expected number of memory locations activated by both of two addresses.
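
    The activation rules being compared can be made concrete with a small simulation. The sketch below implements a Kanerva-style Hamming-radius rule and a selected-coordinate rule with illustrative parameters (a radius of 451 is the value often quoted for 1000-bit addresses, giving roughly a 0.1 per cent activation probability) and counts the locations activated by both of two addresses, in the spirit of the analysis described above.

      import numpy as np

      rng = np.random.default_rng(1)
      N, M, R, K = 1000, 2000, 451, 10   # address width, #locations, Hamming radius, #selected coords

      hard_locs = rng.integers(0, 2, size=(M, N), dtype=np.uint8)   # random hard locations
      sel_coords = [rng.choice(N, size=K, replace=False) for _ in range(M)]
      sel_values = [hard_locs[i, sel_coords[i]] for i in range(M)]

      def activate_kanerva(addr):
          """Kanerva design: a location fires if its Hamming distance to the
          address is at most R."""
          return np.count_nonzero(hard_locs != addr, axis=1) <= R

      def activate_selected(addr):
          """Selected-coordinate design: a location fires if the address matches
          it on the location's K chosen coordinates."""
          return np.array([np.array_equal(addr[c], v) for c, v in zip(sel_coords, sel_values)])

      a = rng.integers(0, 2, size=N, dtype=np.uint8)
      b = a.copy()
      flip = rng.choice(N, size=100, replace=False)
      b[flip] ^= 1                                   # a second address 100 bits away
      for act in (activate_kanerva, activate_selected):
          fa, fb = act(a), act(b)
          print(act.__name__, "| activated by a:", fa.sum(),
                "| activated by both a and b:", np.count_nonzero(fa & fb))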

  15. Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horoshko, D B

    2007-12-31

    The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (Fifth seminar in memory of D. N. Klyshko)
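
    For readers unfamiliar with the ingredients, the toy simulation below shows BB84 basis sifting followed by a repetition-style public parity comparison; it only illustrates what information becomes public, and does not model the quantum-memory attack analysed in the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 4000
      alice_bits  = rng.integers(0, 2, n)
      alice_bases = rng.integers(0, 2, n)            # 0 = rectilinear, 1 = diagonal
      bob_bases   = rng.integers(0, 2, n)

      # Channel model: Bob recovers Alice's bit when the bases match and gets a
      # random bit otherwise; a small flip rate stands in for noise or eavesdropping.
      noise = rng.random(n) < 0.03
      bob_bits = np.where(alice_bases == bob_bases, alice_bits, rng.integers(0, 2, n)) ^ noise

      # Sifting: keep only the positions where the bases agree.
      keep = alice_bases == bob_bases
      ka, kb = alice_bits[keep], bob_bits[keep]

      # Repetition-style step (toy): publicly compare parities of 3-bit blocks.
      # The paper's point, that an eavesdropper can exploit this public parity
      # information, is not modelled here.
      m = len(ka) - len(ka) % 3
      pa = ka[:m].reshape(-1, 3).sum(axis=1) % 2
      pb = kb[:m].reshape(-1, 3).sum(axis=1) % 2
      print("sifted bits:", len(ka), "| fraction of parity-agreeing blocks:",
            round(float((pa == pb).mean()), 3))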

  16. Parallel Programming Paradigms

    DTIC Science & Technology

    1987-07-01

    [OCR fragment of the report's documentation page and body. Recoverable information: distribution of the report is unlimited; the work was supported in part by a grant whose number is partially garbled (ending 8416878) and by Office of Naval Research Contracts No. N00014-86-K-0264 and No. N00014-85-K-0328; the dissertation on parallel programming paradigms notes that allowing several processors to fetch from the same memory cell (list head) seems to favor a shared-memory implementation [37].]

  17. Weather prediction using a genetic memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) is an associative memory model based on the mathematical properties of high dimensional binary address spaces. Holland's genetic algorithms are a search technique for high dimensional spaces inspired by evolutionary processes of DNA. Genetic Memory is a hybrid of the above two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. This architecture is designed to maximize the ability of the system to scale up to handle real world problems.

  18. Low latency messages on distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Saltz, Joel

    1993-01-01

    Many of the issues in developing an efficient interface for communication on distributed memory machines are described and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that will better match current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. Information that is transmitted is either data or procedures to be executed on the remote processor. The dual processor system is better suited for efficiently handling asynchronous communications compared to a single processor system. The ability to send either data or procedures provides flexibility for minimizing message latency, depending on the type of communication being performed. The tests performed and the proposed interface are described.
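
    A modern analogue of the proposed overlap of communication with computation is posting nonblocking messages and doing local work before waiting. The mpi4py sketch below illustrates only that idea; it is not the iPSC/860 interface described in the report.

      # Overlap communication and computation with nonblocking messages (mpi4py).
      # Run with two ranks, e.g.:  mpiexec -n 2 python this_script.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      other = 1 - rank

      send_buf = np.full(1_000_000, rank, dtype=np.float64)
      recv_buf = np.empty_like(send_buf)

      # Post the communication first ...
      reqs = [comm.Isend(send_buf, dest=other, tag=0),
              comm.Irecv(recv_buf, source=other, tag=0)]

      # ... then do useful local work while the messages are in flight ...
      local = np.sin(send_buf).sum()

      # ... and only block when the remote data is actually needed.
      MPI.Request.Waitall(reqs)
      print(f"rank {rank}: local={local:.2f}, received mean={recv_buf.mean():.1f}")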

  19. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

    We present a cluster-based volume rendering system for roaming very large volumes. This system allows the user to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that both aggregates graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
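
    The lookup order described (local cache, then peer caches, then disk) can be sketched as follows; the class and its structures are illustrative stand-ins, not the actual renderer.

      class BrickCache:
          """Toy lookup in the spirit of the described distributed cache: check
          the local cache, then the caches of peer nodes, and only then go to disk."""

          def __init__(self, node_id, peers):
              self.node_id = node_id
              self.local = {}          # brick_id -> data held on this node
              self.peers = peers       # other BrickCache instances (stand-in for the network)

          def load_from_disk(self, brick_id):
              return f"brick {brick_id} read from disk by node {self.node_id}"

          def get(self, brick_id):
              if brick_id in self.local:                      # 1) local cache hit
                  return self.local[brick_id], "local"
              for peer in self.peers:                         # 2) page resident on a peer?
                  if brick_id in peer.local:
                      data = peer.local[brick_id]             # fetch over the (fast) network
                      self.local[brick_id] = data
                      return data, f"peer {peer.node_id}"
              data = self.load_from_disk(brick_id)            # 3) fall back to slow disk I/O
              self.local[brick_id] = data
              return data, "disk"

      # Toy usage: node 1 already holds brick 42, so node 0 avoids a disk read.
      n0, n1 = BrickCache(0, []), BrickCache(1, [])
      n0.peers, n1.peers = [n1], [n0]
      n1.local[42] = "brick 42 data"
      print(n0.get(42))   # -> ('brick 42 data', 'peer 1')
      print(n0.get(99))   # -> read from disk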

  20. A compositional reservoir simulator on distributed memory parallel computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rame, M.; Delshad, M.

    1995-12-31

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/960 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
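
    The subdomain extension mentioned above is a ghost-cell (halo) exchange. The serial NumPy sketch below illustrates it for a one-dimensional field and a three-point stencil, without MPI and without any UTCHEM physics.

      import numpy as np

      def split_with_ghosts(field, nparts):
          """Split a 1-D field into subdomains, each padded with one ghost cell on
          either side holding the neighbour's boundary value (serial illustration
          of the halo exchange; the global edges simply duplicate the boundary)."""
          chunks = np.array_split(field, nparts)
          padded = []
          for i, c in enumerate(chunks):
              left = chunks[i - 1][-1] if i > 0 else c[0]
              right = chunks[i + 1][0] if i < nparts - 1 else c[-1]
              padded.append(np.concatenate(([left], c, [right])))
          return padded

      def local_stencil(sub):
          """Three-point average stencil applied to the interior of a padded subdomain."""
          return (sub[:-2] + sub[1:-1] + sub[2:]) / 3.0

      field = np.linspace(0.0, 1.0, 16)
      parts = split_with_ghosts(field, 4)
      result = np.concatenate([local_stencil(p) for p in parts])
      # Interior points match the single-domain stencil; only the global edges
      # use duplicated boundary values.
      print(np.round(result, 3))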

  1. Multimodal retrieval of autobiographical memories: sensory information contributes differently to the recollection of events.

    PubMed

    Willander, Johan; Sikström, Sverker; Karlsson, Kristina

    2015-01-01

    Previous studies on autobiographical memory have focused on unimodal retrieval cues (i.e., cues pertaining to one modality). However, from an ecological perspective multimodal cues (i.e., cues pertaining to several modalities) are highly important to investigate. In the present study we investigated age distributions and experiential ratings of autobiographical memories retrieved with unimodal and multimodal cues. Sixty-two participants were randomized to one of four cue-conditions: visual, olfactory, auditory, or multimodal. The results showed that the peak of the distributions depends on the modality of the retrieval cue. The results indicated that multimodal retrieval seemed to be driven by visual and auditory information to a larger extent and to a lesser extent by olfactory information. Finally, no differences were observed in the number of retrieved memories or experiential ratings across the four cue-conditions.

  2. Discrete-Slots Models of Visual Working-Memory Response Times

    PubMed Central

    Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.

    2014-01-01

    Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956
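
    The mixture idea can be made concrete with a toy simulation: with some probability the probed item occupies a slot and a strong-drift accumulator produces the response, otherwise a zero-drift guessing accumulator does, so observed RTs are a probabilistic mixture of the two processes. The parameters below are illustrative, not the fitted values from the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      def accumulate(drift, threshold=30.0, noise=1.0, dt=1.0, t0=300.0):
          """Random-walk evidence accumulation to a symmetric boundary.
          Returns (response time in ms, choice: +1 'change' / -1 'no change')."""
          x, t = 0.0, 0.0
          while abs(x) < threshold:
              x += drift * dt + noise * np.sqrt(dt) * rng.normal()
              t += dt
          return t0 + t, 1 if x > 0 else -1

      def trial(p_in_slot=0.6):
          """Mixed-state trial: memory-based accumulation if the probed item
          occupies a slot, zero-drift guessing otherwise (toy parameters)."""
          if rng.random() < p_in_slot:
              return accumulate(drift=1.0)    # memory-based: fast, mostly correct
          return accumulate(drift=0.0)        # guessing: slow, 50/50

      rts, choices = zip(*(trial() for _ in range(2000)))
      rts = np.array(rts)
      print("median RT:", np.median(rts), "| 90th percentile:", np.percentile(rts, 90))
      print("proportion 'change' responses:", round(float(np.mean(np.array(choices) == 1)), 2))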

  3. Effects of cacheing on multitasking efficiency and programming strategy on an ELXSI 6400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.; Benner, R.E.

    1985-12-01

    The impact of a cache/shared memory architecture, and, in particular, the cache coherency problem, upon concurrent algorithm and program development is discussed. In this context, a simple set of programming strategies are proposed which streamline code development and improve code performance when multitasking in a cache/shared memory or distributed memory environment.

  4. Phonotactic Probability Effect in Nonword Recall and Its Relationship with Vocabulary in Monolingual and Bilingual Preschoolers

    ERIC Educational Resources Information Center

    Messer, Marielle H.; Leseman, Paul P. M.; Boom, Jan; Mayo, Aziza Y.

    2010-01-01

    The current study examined to what extent information in long-term memory concerning the distribution of phoneme clusters in a language, so-called long-term phonotactic knowledge, increased the capacity of verbal short-term memory in young language learners and, through increased verbal short-term memory capacity, supported these children's first…

  5. Contrasting single and multi-component working-memory systems in dual tasking.

    PubMed

    Nijboer, Menno; Borst, Jelmer; van Rijn, Hedderik; Taatgen, Niels

    2016-05-01

    Working memory can be a major source of interference in dual tasking. However, there is no consensus on whether this interference is the result of a single working memory bottleneck, or of interactions between different working memory components that together form a complete working-memory system. We report a behavioral and an fMRI dataset in which working memory requirements are manipulated during multitasking. We show that a computational cognitive model that assumes a distributed version of working memory accounts for both behavioral and neuroimaging data better than a model that takes a more centralized approach. The model's working memory consists of an attentional focus, declarative memory, and a subvocalized rehearsal mechanism. Thus, the data and model favor an account where working memory interference in dual tasking is the result of interactions between different resources that together form a working-memory system. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. System and method for programmable bank selection for banked memory subsystems

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Hoenicke, Dirk; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan

    2010-09-07

    A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and a second logic device responsive to each of the respective select signals for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device in a computing environment to access memory storage distributed across the one or more memory storage structures.
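
    The first-stage behaviour described in the abstract, matching preselected physical address bits against programmed values to pick a bank, can be sketched in software as follows; the bit positions, masks, and bank numbers are illustrative, not those of the patent.

      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class BankSelector:
          """Toy programmable bank selection: each rule is (bit_mask, match_value, bank).
          An address selects the bank of the first rule whose masked bits equal the
          programmed value; otherwise a default bank is used."""
          rules: List[Tuple[int, int, int]] = field(default_factory=list)
          default_bank: int = 0

          def program(self, bit_mask: int, match_value: int, bank: int) -> None:
              self.rules.append((bit_mask, match_value, bank))

          def select(self, phys_addr: int) -> int:
              for bit_mask, match_value, bank in self.rules:
                  if phys_addr & bit_mask == match_value:
                      return bank
              return self.default_bank

      # Toy usage: interleave on address bits 12-13 into four banks, except that a
      # programmed rule steers one 4 KiB window to a dedicated bank.
      sel = BankSelector()
      sel.program(0xFFFFF000, 0x0008_0000, bank=7)          # special window -> bank 7
      for b in range(4):
          sel.program(0x3000, b << 12, bank=b)              # otherwise use bits 12-13
      print(sel.select(0x0008_0ABC))   # -> 7
      print(sel.select(0x0000_2F00))   # -> 2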

  7. Memory consolidation reconfigures neural pathways involved in the suppression of emotional memories

    PubMed Central

    Liu, Yunzhe; Lin, Wanjun; Liu, Chao; Luo, Yuejia; Wu, Jianhui; Bayley, Peter J.; Qin, Shaozheng

    2016-01-01

    The ability to suppress unwanted emotional memories is crucial for human mental health. Through consolidation over time, emotional memories often become resistant to change. However, how consolidation impacts the effectiveness of emotional memory suppression is still unknown. Using event-related fMRI while concurrently recording skin conductance, we investigated the neurobiological processes underlying the suppression of aversive memories before and after overnight consolidation. Here we report that consolidated aversive memories retain their emotional reactivity and become more resistant to suppression. Suppression of consolidated memories involves higher prefrontal engagement, and less concomitant hippocampal and amygdala disengagement. In parallel, we show a shift away from hippocampal-dependent representational patterns to distributed neocortical representational patterns in the suppression of aversive memories after consolidation. These findings demonstrate rapid changes in emotional memory organization with overnight consolidation, and suggest possible neurobiological bases underlying the resistance to suppression of emotional memories in affective disorders. PMID:27898050

  8. Constructing Neuronal Network Models in Massively Parallel Environments.

    PubMed

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  9. Constructing Neuronal Network Models in Massively Parallel Environments

    PubMed Central

    Ippen, Tammo; Eppler, Jochen M.; Plesser, Hans E.; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. PMID:28559808

  10. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
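
    The kernel in question is betweenness centrality. For reference, the sketch below is the standard sequential Brandes algorithm on an unweighted graph, not the authors' lock-free multithreaded variant.

      from collections import deque

      def betweenness(adj):
          """Sequential Brandes algorithm for unweighted graphs, with `adj` an
          adjacency dict: node -> list of neighbours."""
          bc = {v: 0.0 for v in adj}
          for s in adj:
              stack, preds = [], {v: [] for v in adj}
              sigma = {v: 0 for v in adj}; sigma[s] = 1
              dist = {v: -1 for v in adj}; dist[s] = 0
              q = deque([s])
              while q:                              # BFS: count shortest paths
                  v = q.popleft(); stack.append(v)
                  for w in adj[v]:
                      if dist[w] < 0:
                          dist[w] = dist[v] + 1; q.append(w)
                      if dist[w] == dist[v] + 1:
                          sigma[w] += sigma[v]; preds[w].append(v)
              delta = {v: 0.0 for v in adj}
              while stack:                          # back-propagate dependencies
                  w = stack.pop()
                  for v in preds[w]:
                      delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                  if w != s:
                      bc[w] += delta[w]
              # (for undirected graphs, the scores can be halved afterwards)
          return bc

      adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
      print(betweenness(adj))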

  11. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-05-29

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.

  12. Is there vacuum when there is mass? Vacuum and non-vacuum solutions for massive gravity

    NASA Astrophysics Data System (ADS)

    Martín-Moruno, Prado; Visser, Matt

    2013-08-01

    Massive gravity is a theory which has a tremendous amount of freedom to describe different cosmologies, but at the same time, the various solutions one encounters must fulfil some rather nontrivial constraints. Most of the freedom comes not from the Lagrangian, which contains only a small number of free parameters (typically three depending on counting conventions), but from the fact that one is in principle free to choose the reference metric almost arbitrarily—which effectively introduces a non-denumerable infinity of free parameters. In the current paper, we stress that although changing the reference metric would lead to a different cosmological model, this does not mean that the dynamics of the universe can be entirely divorced from its matter content. That is, while the choice of reference metric certainly influences the evolution of the physically observable foreground metric, the effect of matter cannot be neglected. Indeed the interplay between matter and geometry can be significantly changed in some specific models; effectively since the graviton would be able to curve the spacetime by itself, without the need of matter. Thus, even the set of vacuum solutions for massive gravity can have significant structure. In some cases, the effect of the reference metric could be so strong that no conceivable material content would be able to drastically affect the cosmological evolution. Dedicated to the memory of Professor Pedro F González-Díaz

  13. Entropy growth in emotional online dialogues

    NASA Astrophysics Data System (ADS)

    Sienkiewicz, J.; Skowron, M.; Paltoglou, G.; Hołyst, Janusz A.

    2013-02-01

    We analyze emotionally annotated massive data from IRC (Internet Relay Chat) and model the dialogues between its participants by assuming that the driving force for the discussion is the entropy growth of the emotional probability distribution.
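
    The driving quantity here is the Shannon entropy of the emotion-label distribution. A minimal sketch of how it grows over a dialogue follows; the valence labels are hypothetical, not the paper's annotation scheme.

      import numpy as np
      from collections import Counter

      def shannon_entropy(labels):
          """Shannon entropy (bits) of the empirical distribution of emotion labels
          (e.g. -1 negative, 0 neutral, +1 positive)."""
          counts = np.array(list(Counter(labels).values()), dtype=float)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      # Running entropy over a toy dialogue: early messages are all neutral,
      # later ones mix valences, so the entropy increases.
      dialogue = [0, 0, 0, 0, 1, 0, -1, 1, 1, -1, 0, -1]
      running = [shannon_entropy(dialogue[: i + 1]) for i in range(len(dialogue))]
      print([round(h, 2) for h in running])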

  14. Gamma-ray line emission from Al-26 produced by Wolf-Rayet stars

    NASA Technical Reports Server (NTRS)

    Prantzos, N.; Casse, M.; Gros, M.; Doom, C.; Arnould, M.

    1985-01-01

    The recent satellite observations of the 1.8 MeV line from the decay of Al-26 have given a new impetus to the study of the nucleosynthesis of Al-26. The production and ejection of Al-26 by massive mass-losing stars (Of and WR stars) is discussed in the light of recent stellar models. The longitude distribution of the Al-26 gamma ray line emission produced by the galactic collection of WR stars is derived based on various estimates of their radial distribution. This longitude profile provides: (1) a specific signature of massive stars on the background of other potential Al-26 sources, such as novae, supernovae, certain red giants and possibly AGB stars; and (2) a possible tool to improve the data analysis of the HEAO 3 and SMM experiments.

  15. Parameters affecting the resilience of scale-free networks to random failures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, Hamilton E.; LaViolette, Randall A.; Lane, Terran

    2005-09-01

    It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. in (1) study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, most of the remaining nodes would still be connected in a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions. In particular, we study scale-free networks which have minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
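
    The experiment described can be reproduced in miniature with a configuration-model graph. The networkx sketch below builds a power-law degree sequence with minimum degree 1, deletes nodes at random, and reports the giant-component fraction among the survivors; sizes, exponents, and failure rates are illustrative, not the paper's.

      import networkx as nx
      import numpy as np

      def giant_fraction_after_failures(n=20000, exponent=2.5, p_fail=0.5, seed=0):
          """Build a configuration-model network with a power-law degree sequence
          (minimum degree 1), delete each node independently with probability
          p_fail, and return the fraction of surviving nodes in the largest
          connected component."""
          rng = np.random.default_rng(seed)
          ks = np.arange(1, 1000)
          probs = ks.astype(float) ** -exponent        # P(k) ~ k^-exponent, k >= 1
          probs /= probs.sum()
          degrees = rng.choice(ks, size=n, p=probs)
          if degrees.sum() % 2:                        # configuration model needs an even sum
              degrees[0] += 1
          G = nx.configuration_model(degrees.tolist(), seed=seed)
          G = nx.Graph(G)                              # drop parallel edges
          G.remove_edges_from(nx.selfloop_edges(G))
          G.remove_nodes_from([v for v in list(G) if rng.random() < p_fail])
          if G.number_of_nodes() == 0:
              return 0.0
          giant = max(nx.connected_components(G), key=len)
          return len(giant) / G.number_of_nodes()

      for p in (0.0, 0.5, 0.9):
          print(p, round(giant_fraction_after_failures(p_fail=p), 3))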

  16. Temporal evolution of brain reorganization under cross-modal training: insights into the functional architecture of encoding and retrieval networks

    NASA Astrophysics Data System (ADS)

    Likova, Lora T.

    2015-03-01

    This study is based on the recent discovery of massive and well-structured cross-modal memory activation generated in the primary visual cortex (V1) of totally blind people as a result of novel training in drawing without any vision (Likova, 2012). This unexpected functional reorganization of primary visual cortex was obtained after undergoing only a week of training by the novel Cognitive-Kinesthetic Method, and was consistent across pilot groups of different categories of visual deprivation: congenitally blind, late-onset blind and blindfolded (Likova, 2014). These findings led us to implicate V1 as the implementation of the theoretical visuo-spatial 'sketchpad' for working memory in the human brain. Since neither the source nor the subsequent 'recipient' of this non-visual memory information in V1 is known, these results raise a number of important questions about the underlying functional organization of the respective encoding and retrieval networks in the brain. To address these questions, an individual totally blind from birth was given a week of Cognitive-Kinesthetic training, accompanied by functional magnetic resonance imaging (fMRI) both before and just after training, and again after a two-month consolidation period. The results revealed a remarkable temporal sequence of training-based response reorganization in both the hippocampal complex and the temporal-lobe object processing hierarchy over the prolonged consolidation period. In particular, a pattern of profound learning-based transformations in the hippocampus was strongly reflected in V1, with the retrieval function showing massive growth as a result of the Cognitive-Kinesthetic memory training and consolidation, while the initially strong hippocampal response during tactile exploration and encoding became non-existent. Furthermore, after training, an alternating patch structure in the form of a cascade of discrete ventral regions underwent radical transformations to reach complete functional specialization in terms of either encoding or retrieval as a function of the stage of learning. Moreover, several distinct patterns of learning evolution emerged within the patches as a function of their anatomical location, implying a complex reorganization of the object-processing sub-networks through the learning period. These first findings of complex patterns of training-based encoding/retrieval reorganization thus have broad implications for a newly emerging view of the perception/memory interactions and their reorganization through the learning process. Note that the temporal evolution of these forms of extended functional reorganization could not be uncovered with conventional assessment paradigms used in the traditional approaches to functional mapping, which may therefore have to be revisited. Moreover, as the present results are obtained in learning under life-long blindness, they imply modality-independent operations, transcending the usual tight association with visual processing. The present approach of memory drawing training in blindness has the dual advantage of being both a non-visual and a causal intervention, which makes it a promising 'scalpel' to disentangle interactions among diverse cognitive functions.

  17. An Observational Study of Blended Young Stellar Clusters in the Galactic Plane - Do Massive Stars form First?

    NASA Astrophysics Data System (ADS)

    Martínez-Galarza, Rafael; Protopapas, Pavlos; Smith, Howard A.; Morales, Esteban

    2018-01-01

    From an observational point of view, the early life of massive stars is difficult to understand partly because star formation occurs in crowded clusters where individual stars often appear blended together in the beams of infrared telescopes. This renders the characterization of the physical properties of young embedded clusters via spectral energy distribution (SED) fitting a challenging task. Of particular relevance for the testing of star formation models is the question of whether the claimed universality of the initial mass function (IMF) is reflected in an equally universal integrated galactic initial mass function (IGIMF) of stars. In other words, is the set of all stellar masses in the galaxy sampled from a single universal IMF, or does the distribution of masses depend on the environment, making the IGIMF different from the canonical IMF? If the latter is true, how different are the two? We present an infrared SED analysis of ~70 Spitzer-selected, low-mass (<100 M⊙), Galactic blended clusters. For all of the clusters we obtain the most probable individual SED of each member and derive their physical properties, effectively deblending the confused emission from individual YSOs. Our algorithm incorporates a combined probabilistic model of the blended SEDs and the unresolved images in the long-wavelength end. We find that our results are compatible with competitive accretion in the central regions of young clusters, with the most massive stars forming early on in the process and less massive stars forming about 1 Myr later. We also find evidence for a relationship between the total stellar mass of the cluster and the mass of the most massive member that favors optimal sampling in the cluster and disfavors random sampling for the canonical IMF, implying that star formation is self-regulated, and that the mass of the most massive star in a cluster depends on the available resources. The method presented here is easily adapted to future observations of clustered regions of star formation with JWST and other high resolution facilities.

  18. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements.
    Catalogue identifier: AFBT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU GPLv3
    No. of lines in distributed program, including test data, etc.: 913552
    No. of bytes in distributed program, including test data, etc.: 270876249
    Distribution format: tar.gz
    Programming language: CUDA/C, MATLAB
    Computer: Intel x64 CPU, GPU supporting CUDA technology
    Operating system: 64-bit Windows 7 Professional
    Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB, CUDA code has been parallelized
    RAM: Dependent on user parameters, typically between several gigabytes and several tens of gigabytes
    Classification: 6.5, 18
    Nature of problem: Speed-up of data processing in optical coherence microscopy
    Solution method: Utilization of GPU for massively parallel data processing
    Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data)
    Running time: 1.8 s for one B-scan (150× faster than the CPU data processing time)

  19. Architectural Strategies for Enabling Data-Driven Science at Scale

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Law, E. S.; Doyle, R. J.; Little, M. M.

    2017-12-01

    The analysis of large data collections from NASA or other agencies is often executed through traditional computational and data analysis approaches, which require users to bring data to their desktops and perform local data analysis. Alternatively, data are hauled to large computational environments that provide centralized data analysis via traditional High Performance Computing (HPC). Scientific data archives, however, are not only growing massive, but are also becoming highly distributed. Neither traditional approach provides a good solution for optimizing analysis into the future. Assumptions across the NASA mission and science data lifecycle, which historically assume that all data can be collected, transmitted, processed, and archived, will not scale as more capable instruments stress legacy-based systems. New paradigms are needed to increase the productivity and effectiveness of scientific data analysis. This paradigm must recognize that architectural and analytical choices are interrelated, and must be carefully coordinated in any system that aims to allow efficient, interactive scientific exploration and discovery to exploit massive data collections, from point of collection (e.g., onboard) to analysis and decision support. The most effective approach to analyzing a distributed set of massive data may involve some exploration and iteration, putting a premium on the flexibility afforded by the architectural framework. The framework should enable scientist users to assemble workflows efficiently, manage the uncertainties related to data analysis and inference, and optimize deep-dive analytics to enhance scalability. In many cases, this "data ecosystem" needs to be able to integrate multiple observing assets, ground environments, archives, and analytics, evolving from stewardship of measurements of data to using computational methodologies to better derive insight from the data that may be fused with other sets of data. This presentation will discuss architectural strategies, including a 2015-2016 NASA AIST Study on Big Data, for evolving scientific research towards massively distributed data-driven discovery. It will include example use cases across earth science, planetary science, and other disciplines.

  20. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    NASA Astrophysics Data System (ADS)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB’s high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
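
    The source-association step is essentially a coordinate range join. The NumPy sketch below shows the same windowed matching idea in a serial setting, without the MonetDB SQL, MERGE TABLE, or REMOTE TABLE machinery; the column names, units (degrees), and matching radius are illustrative.

      import numpy as np

      def associate(new_ra, new_dec, cat_ra, cat_dec, radius=2.0 / 3600.0):
          """Window ('range join') association of newly extracted sources with a
          catalogue sorted by RA: binary-search the RA window, then filter on Dec."""
          order = np.argsort(cat_ra)
          ra_sorted, dec_sorted = cat_ra[order], cat_dec[order]
          matches = []
          for i, (ra, dec) in enumerate(zip(new_ra, new_dec)):
              lo = np.searchsorted(ra_sorted, ra - radius, side="left")
              hi = np.searchsorted(ra_sorted, ra + radius, side="right")
              cand = np.arange(lo, hi)
              cand = cand[np.abs(dec_sorted[cand] - dec) <= radius]
              matches.append((i, order[cand]))          # (new index, catalogue indices)
          return matches

      rng = np.random.default_rng(4)
      cat_ra, cat_dec = rng.uniform(0, 1, 100000), rng.uniform(0, 1, 100000)
      new_ra, new_dec = cat_ra[:5] + 1e-4 / 3600, cat_dec[:5]
      print(associate(new_ra, new_dec, cat_ra, cat_dec)[:2])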

  1. CDL description of the CDC 6600 stunt box

    NASA Technical Reports Server (NTRS)

    Hertzog, J. B.

    1971-01-01

    The CDC 6600 central memory control (stunt box) is described utilizing CDL (Computer Design Language), block diagrams, and text. The stunt box is a clearing house for all central memory references from the 6600 central and peripheral processors. Since memory requests can be issued simultaneously, the stunt box must be capable of assigning priorities to requests, of labeling requests so that the data will be distributed correctly, and of remembering addresses rejected because of memory conflicts.
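
    A minimal Python sketch of this kind of arbitration, purely illustrative and not based on CDC documentation, is shown below: each request carries a priority and a routing tag, the highest-priority request whose memory bank is free is granted, and rejected requests are recirculated so that no address is lost.

      from collections import deque

      class Request:
          def __init__(self, source, address, priority, tag):
              self.source = source      # issuing processor (central or peripheral)
              self.address = address    # central memory address
              self.priority = priority  # lower number = higher priority (hypothetical)
              self.tag = tag            # label used to route the returned data

      def arbitrate(pending, busy_banks):
          """Grant the highest-priority requests whose banks are free; recirculate
          the rest so they are remembered and retried on the next cycle."""
          granted, recirculated = [], deque()
          for req in sorted(pending, key=lambda r: r.priority):
              bank = req.address % 32          # hypothetical bank interleave factor
              if bank not in busy_banks:
                  busy_banks.add(bank)
                  granted.append(req)          # data will return carrying req.tag
              else:
                  recirculated.append(req)     # conflict: remember and retry later
          return granted, recirculated

      reqs = [Request("cpu", 5, 0, "A"), Request("ppu", 37, 2, "B")]  # both map to bank 5
      granted, retry = arbitrate(reqs, set())
      print([r.tag for r in granted], [r.tag for r in retry])         # ['A'] ['B']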

  2. Recognition of simple visual images using a sparse distributed memory: Some implementations and experiments

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1990-01-01

    Previously, a method was described for representing a class of simple visual images so that they could be used with a Sparse Distributed Memory (SDM). Herein, two possible implementations of an SDM are described, for which these images, suitably encoded, serve both as addresses to the memory and as data to be stored in the memory. A key feature of both implementations is that a pattern represented as an unordered set with a variable number of members can be used as an address to the memory. In the first model, an image is encoded as a 9072-bit string to be used as a read or write address; the bit string may also be used as data to be stored in the memory. Another representation, in which an image is encoded as a 256-bit string, may be used with either model as data to be stored in the memory, but not as an address. In the second model, an image is not represented as a vector of fixed length to be used as an address. Instead, a rule is given for determining which memory locations are to be activated in response to an encoded image. This activation rule treats the pieces of an image as an unordered set. With this model, the memory can be simulated, based on a method of computing the approximate result of a read operation.
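
    The read/write mechanics of an SDM can be sketched in a few lines of Python. The sketch below is a generic Kanerva-style memory with Hamming-radius activation; the dimensions and radius are arbitrary and do not reproduce the 9072-bit or 256-bit encodings, or the unordered-set activation rule, described in the report.

      import numpy as np

      # Minimal Kanerva-style SDM sketch; word length, location count, and radius
      # are illustrative choices, not the encodings described in the report.
      rng = np.random.default_rng(0)
      N, M, RADIUS = 256, 1000, 110                    # word length, hard locations, activation radius

      addresses = rng.integers(0, 2, size=(M, N))      # fixed random hard-location addresses
      counters  = np.zeros((M, N), dtype=int)          # data counters at each location

      def activated(cue):
          """Locations whose address lies within Hamming RADIUS of the cue."""
          return np.count_nonzero(addresses != cue, axis=1) <= RADIUS

      def write(addr, data):
          counters[activated(addr)] += 2 * data - 1    # store data bits as +/-1 increments

      def read(addr):
          total = counters[activated(addr)].sum(axis=0)
          return (total > 0).astype(int)               # majority vote per bit

      pattern = rng.integers(0, 2, size=N)
      write(pattern, pattern)                          # autoassociative store
      noisy = pattern.copy(); noisy[:20] ^= 1          # corrupt 20 bits of the cue
      print(np.count_nonzero(read(noisy) != pattern))  # ideally few or zero errors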

  3. Massive stars, disks, and clustered star formation

    NASA Astrophysics Data System (ADS)

    Moeckel, Nickolas Barry

    The formation of an isolated massive star is inherently more complex than the relatively well-understood collapse of an isolated, low-mass star. The dense, clustered environment where massive stars are predominantly found further complicates the picture, and suggests that interactions with other stars may play an important role in the early life of these objects. In this thesis we present the results of numerical hydrodynamic experiments investigating interactions between a massive protostar and its lower-mass cluster siblings. We explore the impact of these interactions on the orientation of disks and outflows, which are potentially observable indications of encounters during the formation of a star. We show that these encounters efficiently form eccentric binary systems, and in clusters similar to Orion they occur frequently enough to contribute to the high multiplicity of massive stars. We suggest that the massive protostar in Cepheus A is currently undergoing a series of interactions, and present simulations tailored to that system. We also apply the numerical techniques used in the massive star investigations to a much lower-mass regime, the formation of planetary systems around solar-mass stars. We perform a small number of illustrative planet-planet scattering experiments, which have been used to explain the eccentricity distribution of extrasolar planets. We add the complication of a remnant gas disk, and show that this feature has the potential to stabilize the system against strong encounters between planets. We present preliminary simulations of Bondi-Hoyle accretion onto a protoplanetary disk, and consider the impact of the flow on the disk properties as well as the impact of the disk on the accretion flow.

  4. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  5. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.

  6. High Band Technology Program (HiTeP)

    DTIC Science & Technology

    2005-03-01

    ... clock distribution circuit. One Receiver Memory module receives a 60 MHz reference sine wave and distributes 60 MHz clock signals to all Receiver Memory ... the Electrically Short Crossed-Notch (ESCN), which is shorter than traditional traveling-wave notch antennas. The 2X ESCN fin length is approximately 1.2 ...

  7. Correlated resistive/capacitive state variability in solid TiO2 based memory devices

    NASA Astrophysics Data System (ADS)

    Li, Qingjiang; Salaoru, Iulia; Khiat, Ali; Xu, Hui; Prodromakis, Themistoklis

    2017-05-01

    In this work, we experimentally demonstrated the correlated resistive/capacitive switching and state variability in practical TiO2 based memory devices. Based on the filamentary functional mechanism, we argue that the impedance state variability stems from randomly distributed defects inside the oxide bulk. Finally, our assumption was verified via a current percolation circuit model that takes into account the random distribution of defects and the coexistence of memristive and memcapacitive behavior.

  8. Timing in a Variable Interval Procedure: Evidence for a Memory Singularity

    PubMed Central

    Matell, Matthew S.; Kim, Jung S.; Hartshorne, Loryn

    2013-01-01

    Rats were trained in either a 30s peak-interval procedure, or a 15–45s variable-interval peak procedure with a uniform distribution (Exp 1) or a ramping probability distribution (Exp 2). Rats in all groups showed peak-shaped response functions centered around 30s, with the uniform group having an earlier and broader peak response function and rats in the ramping group having a later peak function as compared to the single-duration group. The changes in these mean functions, as well as the statistics from single trial analyses, can be better captured by a model of timing in which memory is represented by a single, average, delay to reinforcement compared to one in which all durations are stored as a distribution, such as the complete memory model of Scalar Expectancy Theory or a simple associative model. PMID:24012783

  9. Modeling of long-range memory processes with inverse cubic distributions by the nonlinear stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Kaulakys, B.; Alaburda, M.; Ruseckas, J.

    2016-05-01

    A well-known fact in the financial markets is the so-called ‘inverse cubic law’ of the cumulative distributions of the long-range memory fluctuations of market indicators such as the number of trade events, the trading volume, and the logarithmic price change. We propose a nonlinear stochastic differential equation (SDE) that yields both the power-law behavior of the power spectral density and the long-range dependent inverse cubic law of the cumulative distribution. This is achieved by assuming that, as the market evolves from calm to violent behavior, the delay time of the system's multiplicative feedback decreases relative to the correlation time of the driving noise. This results in a transition from the Itô to the Stratonovich sense of the SDE and yields a long-range memory process.
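
    For orientation, a representative nonlinear SDE of the kind used in this line of work can be written as follows; the abstract does not reproduce the paper's exact equation, so the form below is an assumption based on the standard parametrization of such models:

      \mathrm{d}x = \sigma^{2}\left(\eta - \tfrac{\lambda}{2}\right) x^{2\eta - 1}\,\mathrm{d}t
                    + \sigma\, x^{\eta}\,\mathrm{d}W_t .

    With reflecting boundaries restricting x to a finite range, the stationary density of such an equation follows a power law, P(x) \propto x^{-\lambda}, so choosing \lambda = 4 gives a cumulative distribution with the inverse cubic tail P(X > x) \propto x^{-3}, while the exponent \eta controls the power-law form of the spectral density.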

  10. Quantifying data retention of perpendicular spin-transfer-torque magnetic random access memory chips using an effective thermal stability factor method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Luc, E-mail: luc.thomas@headway.com; Jan, Guenole; Le, Son

    The thermal stability of perpendicular Spin-Transfer-Torque Magnetic Random Access Memory (STT-MRAM) devices is investigated at chip level. Experimental data are analyzed in the framework of the Néel-Brown model including distributions of the thermal stability factor Δ. We show that in the low error rate regime important for applications, the effect of distributions of Δ can be described by a single quantity, the effective thermal stability factor Δ_eff, which encompasses both the median and the standard deviation of the distributions. Data retention of memory chips can be assessed accurately by measuring Δ_eff as a function of device diameter and temperature. We apply this method to show that 54 nm devices based on our perpendicular STT-MRAM design meet our 10 year data retention target up to 120 °C.
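
    For background, the Néel-Brown model mentioned above gives, in its standard form (not quoted from the paper), the probability that a single bit loses its state within time t:

      P_\mathrm{fail}(t) = 1 - \exp\!\left(-\frac{t}{\tau_0}\, e^{-\Delta}\right)
                         \approx \frac{t}{\tau_0}\, e^{-\Delta}
                         \qquad (t \ll \tau_0\, e^{\Delta}),

    where \tau_0 is the attempt time (commonly taken to be of order 1 ns) and \Delta = E_b / k_B T is the thermal stability factor. Averaging this failure probability over a chip-level distribution of \Delta is what motivates collapsing the distribution into a single effective value Δ_eff.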

  11. Neural bases of orthographic long-term memory and working memory in dysgraphia.

    PubMed

    Rapp, Brenda; Purcell, Jeremy; Hillis, Argye E; Capasso, Rita; Miceli, Gabriele

    2016-02-01

    Spelling a word involves the retrieval of information about the word's letters and their order from long-term memory as well as the maintenance and processing of this information by working memory in preparation for serial production by the motor system. While it is known that brain lesions may selectively affect orthographic long-term memory and working memory processes, relatively little is known about the neurotopographic distribution of the substrates that support these cognitive processes, or the lesions that give rise to the distinct forms of dysgraphia that affect these cognitive processes. To examine these issues, this study uses a voxel-based mapping approach to analyse the lesion distribution of 27 individuals with dysgraphia subsequent to stroke, who were identified on the basis of their behavioural profiles alone, as suffering from deficits only affecting either orthographic long-term or working memory, as well as six other individuals with deficits affecting both sets of processes. The findings provide, for the first time, clear evidence of substrates that selectively support orthographic long-term and working memory processes, with orthographic long-term memory deficits centred in either the left posterior inferior frontal region or left ventral temporal cortex, and orthographic working memory deficits primarily arising from lesions of the left parietal cortex centred on the intraparietal sulcus. These findings also contribute to our understanding of the relationship between the neural instantiation of written language processes and spoken language, working memory and other cognitive skills.

  12. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2015-12-01

    As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable, high-performance computing for big data analytics has become urgent, because many research activities are constrained by software and tools that cannot complete the computation at all. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as demonstrated by our prior work, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.

  13. moocRP: Enabling Open Learning Analytics with an Open Source Platform for Data Distribution, Analysis, and Visualization

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin

    2016-01-01

    In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…

  14. PARALLEL HOP: A SCALABLE HALO FINDER FOR MASSIVE COSMOLOGICAL DATA SETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skory, Stephen; Turk, Matthew J.; Norman, Michael L.

    2010-11-15

    Modern N-body cosmological simulations contain billions (10^9) of dark matter particles. These simulations require hundreds to thousands of gigabytes of memory and employ hundreds to tens of thousands of processing cores on many compute nodes. In order to study the distribution of dark matter in a cosmological simulation, the dark matter halos must be identified using a halo finder, which establishes the halo membership of every particle in the simulation. The resources required for halo finding are similar to the requirements for the simulation itself. In particular, simulations have become too extensive to use commonly employed halo finders, such that the computational requirements to identify halos must now be spread across multiple nodes and cores. Here, we present a scalable-parallel halo finding method called Parallel HOP for large-scale cosmological simulation data. Based on the halo finder HOP, it utilizes message passing interface and domain decomposition to distribute the halo finding workload across multiple compute nodes, enabling analysis of much larger data sets than is possible with the strictly serial or previous parallel implementations of HOP. We provide a reference implementation of this method as a part of the toolkit yt, an analysis toolkit for adaptive mesh refinement data that include complementary analysis modules. Additionally, we discuss a suite of benchmarks that demonstrate that this method scales well up to several hundred tasks and data sets in excess of 2000^3 particles. The Parallel HOP method and our implementation can be readily applied to any kind of N-body simulation data and is therefore widely applicable.

  15. Figuring fact from fiction: unbiased polling of memory T cells.

    PubMed

    Gerlach, Carmen; Loughhead, Scott M; von Andrian, Ulrich H

    2015-05-07

    Immunization generates several memory T cell subsets that differ in their migratory properties, anatomic distribution, and, hence, accessibility to investigation. In this issue, Steinert et al. demonstrate that what was believed to be a minor memory cell subset in peripheral tissues has been dramatically underestimated. Thus, current models of protective immunity require revision.

  16. Working memory retrieval as a decision process

    PubMed Central

    Pearson, Benjamin; Raškevičius, Julius; Bays, Paul M.; Pertzov, Yoni; Husain, Masud

    2014-01-01

    Working memory (WM) is a core cognitive process fundamental to human behavior, yet the mechanisms underlying it remain highly controversial. Here we provide a new framework for understanding retrieval of information from WM, conceptualizing it as a decision based on the quality of internal evidence. Recent findings have demonstrated that precision of WM decreases with memory load. If WM retrieval uses a decision process that depends on memory quality, systematic changes in response time distribution should occur as a function of WM precision. We asked participants to view sample arrays and, after a delay, report the direction of change in location or orientation of a probe. As WM precision deteriorated with increasing memory load, retrieval time increased systematically. Crucially, the shape of reaction time distributions was consistent with a linear accumulator decision process. Varying either task relevance of items or maintenance duration influenced memory precision, with corresponding shifts in retrieval time. These results provide strong support for a decision-making account of WM retrieval based on noisy storage of items. Furthermore, they show that encoding, maintenance, and retrieval in WM need not be considered as separate processes, but may instead be conceptually unified as operations on the same noise-limited, neural representation. PMID:24492597
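
    The qualitative prediction, that lower memory precision lengthens and skews retrieval times, can be illustrated with a minimal linear-accumulator simulation in Python. This is a generic sketch, not the authors' fitted model, and the mapping from memory load to mean accumulation rate is an assumption made only for illustration.

      import numpy as np

      # Minimal linear-accumulator sketch: evidence accumulates at a normally
      # distributed rate until it reaches a bound; lower memory precision is
      # modeled (as an assumption) as a lower mean drift rate, which lengthens
      # and skews the predicted retrieval-time distribution.
      rng = np.random.default_rng(1)
      BOUND, T_NONDECISION = 1.0, 0.3              # arbitrary units / seconds

      def simulate_rts(mean_rate, sd_rate=0.3, n=10000):
          rates = rng.normal(mean_rate, sd_rate, n)
          rates = rates[rates > 0]                 # keep trials that reach the bound
          return T_NONDECISION + BOUND / rates

      for load, mean_rate in [(1, 3.0), (4, 1.5)]:     # hypothetical precision-to-rate mapping
          rts = simulate_rts(mean_rate)
          print(f"load {load}: median RT {np.median(rts):.2f} s, "
                f"90th pct {np.percentile(rts, 90):.2f} s")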

  17. The Molecular Basis of Memory

    PubMed Central

    2012-01-01

    We propose a tripartite biochemical mechanism for memory. Three physiologic components are involved, namely, the neuron (individual and circuit), the surrounding neural extracellular matrix, and the various trace metals distributed within the matrix. The binding of a metal cation affects a corresponding nanostructure (shrinking, twisting, expansion) and the dielectric sensibility of the chelating node (address) within the matrix lattice, sensed by the neuron. The neural extracellular matrix serves as an electro-elastic lattice, wherein neurons manipulate multiple trace metals (n > 10) to encode, store, and decode cognitive information. The proposed mechanism explains the brain's low energy requirements and high storage capacity, described in multiples of Avogadro's number (N_A = 6 × 10^23). Supportive evidence correlates memory loss with trace metal toxicity or deficiency, with breakdown in the delivery/transport of metals to the matrix, or with its degradation. Inherited diseases revolving around dysfunctional trace metal metabolism and memory dysfunction include Alzheimer's disease (Al, Zn, Fe), Wilson’s disease (Cu), thalassemia (Fe), and autism (metallothionein). The tripartite mechanism points to the electro-elastic interactions of neurons with trace metals distributed within the neural extracellular matrix as the molecular underpinning of “synaptic plasticity” affecting short-term memory, long-term memory, and forgetting. PMID:23050060

  18. Working memory retrieval as a decision process.

    PubMed

    Pearson, Benjamin; Raskevicius, Julius; Bays, Paul M; Pertzov, Yoni; Husain, Masud

    2014-02-03

    Working memory (WM) is a core cognitive process fundamental to human behavior, yet the mechanisms underlying it remain highly controversial. Here we provide a new framework for understanding retrieval of information from WM, conceptualizing it as a decision based on the quality of internal evidence. Recent findings have demonstrated that precision of WM decreases with memory load. If WM retrieval uses a decision process that depends on memory quality, systematic changes in response time distribution should occur as a function of WM precision. We asked participants to view sample arrays and, after a delay, report the direction of change in location or orientation of a probe. As WM precision deteriorated with increasing memory load, retrieval time increased systematically. Crucially, the shape of reaction time distributions was consistent with a linear accumulator decision process. Varying either task relevance of items or maintenance duration influenced memory precision, with corresponding shifts in retrieval time. These results provide strong support for a decision-making account of WM retrieval based on noisy storage of items. Furthermore, they show that encoding, maintenance, and retrieval in WM need not be considered as separate processes, but may instead be conceptually unified as operations on the same noise-limited, neural representation.

  19. Virtual memory support for distributed computing environments using a shared data object model

    NASA Astrophysics Data System (ADS)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together constitute a novel approach to the support for flexible coherence under application control.

  20. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  1. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE PAGES

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett; ...

    2017-01-01

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  2. The CP-PACS parallel computer

    NASA Astrophysics Data System (ADS)

    Ukawa, Akira

    1998-05-01

    The CP-PACS computer is a massively parallel computer consisting of 2048 processing units and having a peak speed of 614 GFLOPS and 128 GByte of main memory. It was developed over the four years from 1992 to 1996 at the Center for Computational Physics, University of Tsukuba, for large-scale numerical simulations in computational physics, especially those of lattice QCD. The CP-PACS computer has been in full operation for physics computations since October 1996. In this article we describe the chronology of the development, the hardware and software characteristics of the computer, and its performance for lattice QCD simulations.

  3. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    NASA Technical Reports Server (NTRS)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effect of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.

  4. Stochastic switching of TiO2-based memristive devices with identical initial memory states

    PubMed Central

    2014-01-01

    In this work, we show that identical TiO2-based memristive devices that possess the same initial resistive states are only phenomenologically similar, as their internal structures may vary significantly, which can render quite dissimilar switching dynamics. We experimentally demonstrated that the resistive switching of practical devices with similar initial states can occur at different programming stimuli cycles. We argue that similar memory states can be transcribed via numerous distinct active core states through dissimilar reduced TiO2-x filamentary distributions. Our hypothesis was finally verified via simulated results of the memory state evolution, by taking into account dissimilar initial filamentary distributions. PMID:24994953

  5. Energy spectra of massive two-body decay products and mass measurement

    DOE PAGES

    Agashe, Kaustubh; Franceschini, Roberto; Hong, Sungwoo; ...

    2016-04-26

    Here, we have recently established a new method for measuring the mass of unstable particles produced at hadron colliders based on the analysis of the energy distribution of a massless product from their two-body decays. The central ingredient of our proposal is the remarkable result that, for an unpolarized decaying particle, the location of the peak in the energy distribution of the observed decay product is identical to the (fixed) value of the energy that this particle would have in the rest-frame of the decaying particle, which, in turn, is a simple function of the involved masses. In addition, we utilized the property that this energy distribution is symmetric around the location of peak when energy is plotted on a logarithmic scale. The general strategy was demonstrated in several specific cases, including both beyond the standard model particles, as well as for the top quark. In the present work, we generalize this method to the case of a massive decay product from a two-body decay; this procedure is far from trivial because (in general) both the above-mentioned properties are no longer valid. Nonetheless, we propose a suitably modified parametrization of the energy distribution that was used successfully for the massless case, which can deal with the massive case as well. We test this parametrization on concrete examples of energy spectra of Z bosons from the decay of a heavier supersymmetric partner of top quark (stop) into a Z boson and a lighter stop. After establishing the accuracy of this parametrization, we study a realistic application for the same process, but now including dominant backgrounds and using foreseeable statistics at LHC14, in order to determine the performance of this method for an actual mass measurement. The upshot of our present and previous work is that, in spite of energy being a Lorentz-variant quantity, its distribution emerges as a powerful tool for mass measurement at hadron colliders.
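
    For reference, standard two-body kinematics (not quoted from the paper) fixes the rest-frame energy of a daughter A in the decay B -> A + C at

      E_A^{*} = \frac{m_B^{2} + m_A^{2} - m_C^{2}}{2\, m_B},

    which reduces to E_a^{*} = (m_B^{2} - m_C^{2}) / (2 m_B) for a massless daughter a; the peak-location property exploited in the earlier, massless analysis refers to this fixed value.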

  6. Multi-Scale Human Respiratory System Simulations to Study Health Effects of Aging, Disease, and Inhaled Substances

    NASA Astrophysics Data System (ADS)

    Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres

    2006-11-01

    Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans, and are used to generate three- dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those that have been proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter, and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters, and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PC’s with run times that allow model-based clinical intervention for individual patients.

  7. a Novel Approach of Indexing and Retrieving Spatial Polygons for Efficient Spatial Region Queries

    NASA Astrophysics Data System (ADS)

    Zhao, J. H.; Wang, X. Z.; Wang, F. Y.; Shen, Z. H.; Zhou, Y. C.; Wang, Y. L.

    2017-10-01

    Spatial region queries are more and more widely used in web-based applications. Mechanisms to provide efficient query processing over geospatial data are essential. However, due to the massive geospatial data volume, heavy geometric computation, and high access concurrency, it is difficult to respond in real time. Spatial indexes are usually used in this situation. In this paper, based on the k-d tree, we introduce a distributed KD-Tree (DKD-Tree) suitable for polygon data, and a two-step query algorithm. The spatial index construction is recursive and iterative, and the query is an in-memory process. Both the index and query methods can be processed in parallel, and are implemented based on HDFS, Spark and Redis. Experiments on a large volume of remote sensing image metadata have been carried out, and the advantages of our method are investigated by comparing with spatial region queries executed on PostgreSQL and PostGIS. Results show that our approach not only greatly improves the efficiency of spatial region queries, but also has good scalability. Moreover, the two-step spatial range query algorithm can also save cluster resources to support a large number of concurrent queries. Therefore, this method is very useful when building large geographic information systems.
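
    The two-step idea, a cheap in-memory filter followed by exact geometric refinement, can be sketched in Python as below. This is an illustration only, not the DKD-Tree implementation; it uses the shapely package for the exact intersection test and a plain bounding-box dictionary in place of the distributed k-d index.

      from shapely.geometry import Polygon, box

      # Two-step spatial region query sketch (illustrative):
      # step 1 filters candidates with a cheap bounding-box test kept in memory,
      # step 2 refines the survivors with an exact geometric intersection test.
      polygons = {
          "scene_a": Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]),
          "scene_b": Polygon([(10, 10), (14, 10), (12, 14)]),
      }
      bbox_index = {name: poly.bounds for name, poly in polygons.items()}  # (minx, miny, maxx, maxy)

      def region_query(minx, miny, maxx, maxy):
          query = box(minx, miny, maxx, maxy)
          # Step 1: coarse filter on bounding boxes (what a k-d style index accelerates).
          candidates = [name for name, (x0, y0, x1, y1) in bbox_index.items()
                        if not (x1 < minx or x0 > maxx or y1 < miny or y0 > maxy)]
          # Step 2: exact refinement on the candidate geometries only.
          return [name for name in candidates if polygons[name].intersects(query)]

      print(region_query(3, 2, 11, 11))  # -> ['scene_a', 'scene_b']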

  8. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for petascale platforms and beyond.

    PubMed

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-04-30

    Various strategies for efficiently implementing quantum Monte Carlo (QMC) simulations of large chemical systems are presented. These include: (i) an efficient algorithm to calculate the computationally expensive Slater matrices, a novel scheme based on the highly localized character of atomic Gaussian basis functions (not the molecular orbitals, as usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) the important enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible.

  9. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    PubMed

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) in which two edges are selected at random and one end vertex of each is exchanged with that of the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
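
    A serial Python sketch of the basic operation is shown below; the paper's contribution is the parallel, distributed-memory version, which is not reproduced here. The swap is rejected whenever it would introduce a self-loop or a parallel edge, so the graph stays simple and the degree sequence is preserved.

      import random

      # One edge-switch step on a simple undirected graph stored as adjacency sets.
      def edge_switch(adj, rng=random):
          edges = [(u, v) for u in adj for v in adj[u] if u < v]
          (u, v), (x, y) = rng.sample(edges, 2)
          # Proposed rewiring: (u, v), (x, y) -> (u, y), (x, v)
          if len({u, v, x, y}) < 4 or y in adj[u] or v in adj[x]:
              return False                     # reject: would create self-loop or parallel edge
          adj[u].remove(v); adj[v].remove(u)
          adj[x].remove(y); adj[y].remove(x)
          adj[u].add(y); adj[y].add(u)
          adj[x].add(v); adj[v].add(x)
          return True

      adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}   # small example graph (a 4-cycle)
      edge_switch(adj)
      print(adj)                                           # degrees are unchanged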

  10. Parallel Algorithms for Switching Edges in Heterogeneous Graphs

    PubMed Central

    Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-01-01

    An edge switch is an operation on a graph (or network) in which two edges are selected at random and one end vertex of each is exchanged with that of the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors. PMID:28757680

  11. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.

  12. A massively parallel computational approach to coupled thermoelastic/porous gas flow problems

    NASA Technical Reports Server (NTRS)

    Shia, David; Mcmanus, Hugh L.

    1995-01-01

    A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.

  13. Memory operation mechanism of fullerene-containing polymer memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakajima, Anri, E-mail: anakajima@hiroshima-u.ac.jp; Fujii, Daiki

    2015-03-09

    The memory operation mechanism in fullerene-containing nanocomposite gate insulators was investigated while varying the kind of fullerene in the polymer gate insulator. We clarified what kinds of traps, and at which positions in the nanocomposite, the injected electrons or holes are stored. The reason for the difference in ease of programming was clarified by taking into account the charging energy of an injected electron. The dependence of the carrier dynamics on the kind of fullerene molecule was investigated. A nonuniform distribution of injected carriers occurred after application of a large-magnitude programming voltage, owing to the width distribution of the polystyrene barrier between adjacent fullerene molecules. Through these investigations, we demonstrated a nanocomposite gate with fullerene molecules having excellent retention characteristics and programming capability. This will lead to the realization of practical organic memories with fullerene-containing polymer nanocomposites.

  14. Single Event Upset Analysis: On-orbit performance of the Alpha Magnetic Spectrometer Digital Signal Processor Memory aboard the International Space Station

    NASA Astrophysics Data System (ADS)

    Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi

    2018-03-01

    Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSP), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and the time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented, and the effects of the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is evaluated by combining data analysis with Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in the DSP program memory, but the effect depends on the on-orbit particle density.

  15. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performance for running parallel dynamic simulation is compared and demonstrated.
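
    The distributed-memory pattern described above can be sketched with mpi4py: each rank advances its own block of dynamic states for a time step and then exchanges interface quantities with the other ranks. The sketch below is illustrative only; the state update, the number of generators per rank, and the use of allgather as the exchange primitive are assumptions, not details of the paper's implementation.

      from mpi4py import MPI
      import numpy as np

      # Each rank owns a block of generator states, advances them locally, and then
      # shares the interface quantities needed by the network (algebraic) equations.
      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 100                                   # hypothetical generators per rank
      state = np.zeros(n_local)                       # local dynamic states

      def step_local(state, dt=0.01):
          # Placeholder for integrating the local differential equations.
          return state + dt * np.sin(state + rank)

      for _ in range(10):                             # a few illustrative time steps
          state = step_local(state)
          interface = np.array([state.mean()])        # stand-in for boundary variables
          all_interfaces = comm.allgather(interface)  # every rank sees every interface
          # ... the shared network equations would be solved using all_interfaces ...

      if rank == 0:
          print("ranks:", size, "sample state:", state[:3])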

  16. Dynamics of massive black holes as a possible candidate of Galactic dark matter

    NASA Technical Reports Server (NTRS)

    Xu, Guohong; Ostriker, Jeremiah P.

    1994-01-01

    If the dark halo of the Galaxy is comprised of massive black holes (MBHs), then those within approximately 1 kpc will spiral to the center, where they will interact with one another, forming binaries which contract, owing to further dynamical friction, and then possibly merge to become more massive objects by emission of gravitational radiation. If successive mergers would invariably lead, as has been proposed by various authors, to the formation of a very massive nucleus of 10^8 solar mass, then the idea of MBHs as a dark matter candidate could be excluded on observational grounds, since the observed limit (or value) for a Galactic central black hole is approximately 10^6.5 solar mass. But, if successive mergers are delayed or prevented by other processes, such as the gravitational slingshot or the rocket effect of gravitational radiation, then a large mass accumulation will not occur. In order to resolve this issue, we perform detailed N-body simulations using a modified Aarseth code to explore the dynamical behavior of the MBHs, and we find that for a 'best estimate' model of the Galaxy a runaway does not occur. The code treats the MBHs as subject to the primary gravitational forces of one another and of the smooth stellar distribution, as well as the secondary perturbations in their orbits due to dynamical friction and gravitational radiation. Instead of a runaway, three-body interactions between hard binaries and single MBHs eject massive objects before accumulation of more than a few units, so that typically the center will contain zero, one, or two MBHs. We study how the situation depends in detail on the mass per MBH, the rotation of the halo, the mass distribution within the Galaxy, and other parameters. A runaway will most sensitively depend on the ratio of initial (spheroid/halo) central mass densities and secondarily on the typical value of the mass per MBH, with the rough dividing line, using Galactic parameters, being M_BH less than or equal to 10^6.5 solar mass. Using parameters from Lacey & Ostriker (1985) and our most accurate model for the Galaxy, no runaway occurs.

  17. CD4 T-Cell Memory Generation and Maintenance

    PubMed Central

    Gasper, David J.; Tejera, Melba Marie; Suresh, M.

    2014-01-01

    Immunologic memory is the adaptive immune system's powerful ability to remember a previous antigen encounter and react with accelerated vigor upon antigen re-exposure. It provides durable protection against reinfection with pathogens and is the foundation for vaccine-induced immunity. Unlike the relatively restricted immunologic purview of memory B cells and CD8 T cells, the field of CD4 T-cell memory must account for multiple distinct lineages with diverse effector functions, the issue of lineage commitment and plasticity, and the variable distribution of memory cells within each lineage. Here, we discuss the evidence for lineage-specific CD4 T-cell memory and summarize the known factors contributing to memory-cell generation, plasticity, and long-term maintenance. PMID:24940912

  18. Global Infrared–Radio Spectral Energy Distributions of Galactic Massive Star-Forming Regions

    NASA Astrophysics Data System (ADS)

    Povich, Matthew Samuel; Binder, Breanna Arlene

    2018-01-01

    We present a multiwavelength study of 30 Galactic massive star-forming regions. We fit multicomponent dust, blackbody, and power-law continuum models to 3.6 µm through 10 mm spectral energy distributions obtained from Spitzer, MSX, IRAS, Herschel, and Planck archival survey data. Averaged across our sample, ~20% of Lyman continuum photons emitted by massive stars are absorbed by dust before contributing to the ionization of H II regions, while ~50% of the stellar bolometric luminosity is absorbed and reprocessed by dust in the H II regions and surrounding photodissociation regions. The most luminous, infrared-bright regions that fully sample the upper stellar initial mass function (ionizing photon rates N_C ≥ 10^50 s^-1 and total infrared luminosity L_TIR ≥ 10^6.8 L⊙) have higher percentages of absorbed Lyman continuum photons (~40%) and dust-reprocessed starlight (~80%). The monochromatic 70-µm luminosity L_70 is linearly correlated with L_TIR, and on average L_70/L_TIR = 50%, in good agreement with extragalactic studies. Calibrated against the known massive stellar content in our sampled H II regions, we find that star formation rates based on L_70 are in reasonably good agreement with extragalactic calibrations, when corrected for the smaller physical sizes of the Galactic regions. We caution that absorption of Lyman continuum photons prior to contributing to the observed ionizing photon rate may reduce the attenuation-corrected Hα emission, systematically biasing extragalactic calibrations toward lower star formation rates when applied to spatially-resolved studies of obscured star formation. This work was supported by the National Science Foundation under award CAREER-1454333.

  19. HBT+: an improved code for finding subhaloes and building merger trees in cosmological simulations

    NASA Astrophysics Data System (ADS)

    Han, Jiaxin; Cole, Shaun; Frenk, Carlos S.; Benitez-Llambay, Alejandro; Helly, John

    2018-02-01

    Dark matter subhalos are the remnants of (incomplete) halo mergers. Identifying them and establishing their evolutionary links in the form of merger trees is one of the most important applications of cosmological simulations. The HBT (Hierachical Bound-Tracing) code identifies haloes as they form and tracks their evolution as they merge, simultaneously detecting subhaloes and building their merger trees. Here we present a new implementation of this approach, HBT+ , that is much faster, more user friendly, and more physically complete than the original code. Applying HBT+ to cosmological simulations, we show that both the subhalo mass function and the peak-mass function are well fitted by similar double-Schechter functions. The ratio between the two is highest at the high-mass end, reflecting the resilience of massive subhaloes that experience substantial dynamical friction but limited tidal stripping. The radial distribution of the most-massive subhaloes is more concentrated than the universal radial distribution of lower mass subhaloes. Subhalo finders that work in configuration space tend to underestimate the masses of massive subhaloes, an effect that is stronger in the host centre. This may explain, at least in part, the excess of massive subhaloes in galaxy cluster centres inferred from recent lensing observations. We demonstrate that the peak-mass function is a powerful diagnostic of merger tree defects, and the merger trees constructed using HBT+ do not suffer from the missing or switched links that tend to afflict merger trees constructed from more conventional halo finders. We make the HBT+ code publicly available.

  20. The effect of paleotopography on lithic distribution and facies associations of small volume ignimbrites: the WTT Cupa (Roccamonfina volcano, Italy)

    NASA Astrophysics Data System (ADS)

    Giordano, Guido

    1998-12-01

    The distribution of lithic clasts within two trachytic, small volume, pumiceous ignimbrites is described from the Quaternary `White Trachytic Tuff Cupa' formation of Roccamonfina volcano, Italy. The ignimbrites show a downslope grading of lithics, with a maximum size where there is a major break in the volcano's slope, rather than at proximal locations. This is also the location where ignimbrites are thickest and most massive. The break in slope is interpreted to have reduced flow capacity and velocity, increasing the sedimentation rate, so that massive ignimbrite formed by hindered settling sedimentation. Ignimbrite Cc exhibits no vertical grading of lithics, though it does show downslope grading with maximum size at the major break in slope and a rapid decrease further downslope. Ignimbrite Cc thins away from the break in slope, and shows an upward fining of the grain size within the topmost few decimeters of the unit. The ignimbrite is stratified proximally, and grades to massive facies at the break in slope, and distally to stratified facies with numerous inverse-graded beds. The simplest mechanism accounting for these downslope variations is progressive aggradation from a quasi-steady, nonuniform pyroclastic density current. The changes in deposit thickness and facies are interpreted to record downcurrent changes in sedimentation rate. The upward fining reflects waning flow. Inversely graded, bedded depositional facies in distal areas are interpreted to reflect flow unsteadiness and a decrease in suspended sediment load. Ignimbrite Cd shows vertical, as well as downslope, grading of lithics. This characteristic, coupled with the widespread massive facies of the deposit and the tabular unit geometry, are features that can be reconciled with both the debris flow/plug analogy for pyroclastic flows ( Sparks, 1976) and the progressive aggradation model ( Branney and Kokelaar, 1992). However, neither appears to fully satisfy the field evidence, implying that when dealing with massive ignimbrites, evidence other than lithic grading needs to be presented to better understand the related transport and depositional processes.

  1. Probing Globular Cluster Formation in Low Metallicity Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Johnson, Kelsey E.; Hunt, Leslie K.; Reines, Amy E.

    2008-12-01

    The ubiquitous presence of globular clusters around massive galaxies today suggests that these extreme star clusters must have been formed prolifically in the earlier universe in low-metallicity galaxies. Numerous adolescent and massive star clusters are already known to be present in a variety of galaxies in the local universe; however most of these systems have metallicities of 12 + log(O/H) > 8, and are thus not representative of the galaxies in which today's ancient globular clusters were formed. In order to better understand the formation and evolution of these massive clusters in environments with few heavy elements, we have targeted several low-metallicity dwarf galaxies with radio observations, searching for newly-formed massive star clusters still embedded in their birth material. The galaxies in this initial study are HS 0822+3542, UGC 4483, Pox 186, and SBS 0335-052, all of which have metallicities of 12 + log(O/H) < 7.75. While no thermal radio sources, indicative of natal massive star clusters, are found in three of the four galaxies, SBS 0335-052 hosts two such objects, which are incredibly luminous. The radio spectral energy distributions of these intense star-forming regions in SBS 0335-052 suggest the presence of ~12,000 equivalent O-type stars, and the implied star formation rate is nearing the maximum starburst intensity limit.

  2. Mixed memory, (non) Hurst effect, and maximum entropy of rainfall in the tropical Andes

    NASA Astrophysics Data System (ADS)

    Poveda, Germán

    2011-02-01

    Diverse linear and nonlinear statistical parameters of rainfall under aggregation in time, and the kind of temporal memory, are investigated. Data sets from the Andes of Colombia at different resolutions (15 min and 1 h) and record lengths (21 months and 8-40 years) are used. A mixture of two timescales is found in the autocorrelation and autoinformation functions, with short-term memory holding for time lags less than 15-30 min, and long-term memory onwards. Consistently, rainfall variance exhibits different temporal scaling regimes separated at 15-30 min and 24 h. Tests for the Hurst effect evidence the frailty of the R/S approach in discerning the kind of memory in high-resolution rainfall, whereas rigorous statistical tests for short-memory processes do reject the existence of the Hurst effect. Rainfall information entropy grows as a power law of aggregation time, S(T) ~ T^β with ⟨β⟩ = 0.51, up to a timescale T_MaxEnt (70-202 h) at which entropy saturates, with β = 0 onwards. Maximum entropy is reached through a dynamic Generalized Pareto distribution, consistently with the maximum information-entropy principle for heavy-tailed random variables, and with its asymptotically infinitely divisible property. The dynamics towards the limit distribution is quantified. Tsallis q-entropies also exhibit power laws with T, such that S_q(T) ~ T^β(q), with β(q) ≤ 0 for q ≤ 0, and β(q) ≃ 0.5 for q ≥ 1. No clear patterns are found in the geographic distribution within and among the statistical parameters studied, confirming the strong variability of tropical Andean rainfall.
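
    The reported power-law growth of information entropy with aggregation time, S(T) ~ T^β, can be illustrated numerically. The snippet below is a minimal, hypothetical sketch (not the authors' code): it aggregates a synthetic gamma-distributed series over increasing windows, estimates Shannon entropy from a histogram at each scale, and fits β as the log-log slope. The bin count and the synthetic series are arbitrary choices, so the fitted exponent will not reproduce the reported ⟨β⟩ = 0.51; the point is only the estimation recipe.

      import numpy as np

      def shannon_entropy(x, bins=64):
          """Histogram estimate of Shannon entropy (in nats) of a 1-D sample."""
          counts, _ = np.histogram(x, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return float(-np.sum(p * np.log(p)))

      rng = np.random.default_rng(0)
      rain = rng.gamma(shape=0.3, scale=2.0, size=2**18)   # synthetic stand-in for 15-min rainfall

      scales, entropies = [2**k for k in range(1, 10)], []
      for T in scales:                                     # aggregate over windows of T steps
          n = (rain.size // T) * T
          entropies.append(shannon_entropy(rain[:n].reshape(-1, T).sum(axis=1)))

      # If S(T) ~ T**beta, the slope of log S against log T estimates beta.
      beta = np.polyfit(np.log(scales), np.log(entropies), 1)[0]
      print("estimated scaling exponent beta:", round(beta, 2))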

  3. Towards Low-Cost Effective and Homogeneous Thermal Activation of Shape Memory Polymers

    PubMed Central

    Lantada, Andrés Díaz; Rebollo, María Ángeles Santamaría

    2013-01-01

    A typical limitation of intelligent devices based on the use of shape-memory polymers as actuators is linked to the widespread use of distributed heating resistors, via the Joule effect, as the activation method, which involves several relevant issues needing attention, such as: (a) final device size is considerably increased due to the additional space required for the resistances; (b) the use of resistances limits materials’ strength and the obtained devices are normally weaker; (c) the activation process through heating resistances is not homogeneous, thus leading to significant temperature differences across the polymeric structure and to undesirable thermal gradients and stresses, also limiting the application fields of shape-memory polymers. In our present work we describe interesting activation alternatives, based on coating shape-memory polymers with different kinds of conductive materials, including textiles, conductive threads and conductive paint, which stand out for their easy, rapid and very cheap implementation. Distributed heating and homogeneous activation can be achieved in several of the alternatives studied, and the technical results are comparable to those obtained by using advanced shape-memory nanocomposites, which have to deal with complex synthesis, processing and security aspects. Different combinations of shape-memory epoxy resin with several coating electrotextiles, conductive films and paints are prepared, simulated with the help of thermal finite element method (FEM) tools, and characterized using infrared thermography to validate the simulations and the overall design process. A final application linked to an active catheter pincer is detailed and the advantages of using distributed heating instead of conventional resistors are discussed. PMID:28788401
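
    To see why distributed heating reduces thermal gradients, a one-dimensional heat-conduction toy model is enough. The sketch below is a minimal explicit finite-difference calculation (not the paper's FEM model); the geometry, diffusivity and heating amplitude are arbitrary illustrative values.

      import numpy as np

      # Minimal 1-D explicit finite-difference heat-conduction sketch: the same total heating
      # power is applied either at a single node ("resistor") or spread uniformly, and the
      # resulting temperature non-uniformity is compared.
      L, n = 0.05, 101                     # domain length [m] and grid points (illustrative)
      dx = L / (n - 1)
      alpha = 1e-7                         # thermal diffusivity [m^2/s], epoxy-like order of magnitude
      dt = 0.4 * dx**2 / alpha             # below the explicit stability limit dx^2 / (2*alpha)
      steps, power = 2000, 0.05            # time steps and heating amplitude [K/s] (arbitrary)

      def simulate(source):
          T = np.zeros(n)                  # temperature rise above ambient [K]
          for _ in range(steps):
              lap = np.zeros(n)
              lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
              T = T + dt * (alpha * lap + source)
              T[0] = T[-1] = 0.0           # ends held at ambient
          return T

      point = np.zeros(n); point[n // 2] = power        # concentrated heat source
      uniform = np.full(n, power / n)                   # same total power, distributed
      for name, src in (("point", point), ("uniform", uniform)):
          T = simulate(src)
          print(f"{name:8s} source -> temperature spread {T.max() - T.min():.3f} K")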

  4. Analysis of crossover between local and massive separation on airfoils

    NASA Technical Reports Server (NTRS)

    Barnett, Mark

    1987-01-01

    The occurrence of massive separation on airfoils operating at high Reynolds number poses an important problem to the aerodynamicist. In the present study, the phenomenon of crossover, induced by airfoil thickness, between local separation and massive separation is investigated for low speed (incompressible), symmetric flow past realistic airfoil geometries. This problem is studied both for the infinite Reynolds number asymptotic limit using triple-deck theory and for finite Reynolds number using interacting boundary-layer theory. Numerical results are presented which illustrate how the flow evolves from local to massive separation as the airfoil thickness is increased. The results of the triple-deck and the interacting boundary-layer analyses are found to be in qualitative agreement for the NACA four digit series and an uncambered supercritical airfoil. The effect of turbulence on the evolution of the flow is also considered. Solutions are presented for turbulent flows past a NACA 0014 airfoil and a circular cylinder. For the latter case, the calculated surface pressure distribution is found to agree well with experimental data if the proper eddy pressure level is specified.

  5. Global Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav

    2015-11-01

    Global Arrays (GA) is a distributed-memory programming model that allows for shared-memory-style programming combined with one-sided communication to create a set of tools that combine high performance with ease of use. GA exposes a relatively straightforward programming abstraction, while supporting fully distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes that was being developed concurrently.
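
    As a rough illustration of the shared-memory-style abstraction over distributed data that this model provides, the toy class below (plain Python, not the GA API) block-distributes a 1-D global index space over a fixed number of ranks and exposes the owner-computes mapping from a global index to (owning rank, local offset) that underlies locality of reference and one-sided put/get.

      # Toy model of a block-distributed "global array" (illustration only; not the GA API).
      class BlockDistributedArray:
          def __init__(self, global_size, nranks):
              self.global_size = global_size
              self.nranks = nranks
              self.block = (global_size + nranks - 1) // nranks   # ceiling division
              # One local buffer per rank; a real PGAS runtime would place these in
              # separate address spaces and move data with one-sided put/get.
              self.local = [[0.0] * self.block for _ in range(nranks)]

          def owner(self, i):
              """Map a global index to (owning rank, local offset)."""
              return i // self.block, i % self.block

          def put(self, i, value):                 # one-sided write, no receiver involvement
              rank, off = self.owner(i)
              self.local[rank][off] = value

          def get(self, i):                        # one-sided read
              rank, off = self.owner(i)
              return self.local[rank][off]

      ga = BlockDistributedArray(global_size=1000, nranks=8)
      ga.put(637, 3.14)
      print(ga.owner(637), ga.get(637))            # -> (5, 12) 3.14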

  6. Variable Order and Distributed Order Fractional Operators

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    2002-01-01

    Many physical processes appear to exhibit fractional order behavior that may vary with time or space. The continuum of order in the fractional calculus allows the order of the fractional operator to be considered as a variable. This paper develops the concept of variable and distributed order fractional operators. Definitions based on the Riemann-Liouville definitions are introduced and the behavior of the operators is studied. Several time domain definitions that assign different arguments to the order q in the Riemann-Liouville definition are introduced. For each of these definitions various characteristics are determined. These include: time invariance of the operator, operator initialization, physical realization, linearity, operational transforms, and memory characteristics of the defining kernels. A measure (m2) for memory retentiveness of the order history is introduced. A generalized linear argument for the order q allows the concept of "tailored" variable order fractional operators whose memory may be chosen for a particular application. Memory retentiveness (m2) and order dynamic behavior are investigated and applications are shown. The concept of distributed order operators, where the order of the time based operator depends on an additional independent (spatial) variable, is also forwarded. Several definitions and their Laplace transforms are developed, analysis methods with these operators are demonstrated, and examples are shown. Finally, operators of multivariable and distributed order are defined and their various applications are outlined.
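
    For orientation, the constant-order Riemann-Liouville fractional integral, one common variable-order generalization, and a distributed-order form are written below in LaTeX. The abstract notes that several time-domain definitions differ precisely in how the argument of the order q is assigned, so the variable-order expression shown is one possible choice rather than the paper's specific definition.

      % Riemann-Liouville fractional integral of constant order q > 0:
      {}_{0}D_{t}^{-q} f(t) = \frac{1}{\Gamma(q)} \int_{0}^{t} (t-\tau)^{q-1} f(\tau)\, d\tau

      % One possible variable-order generalization (order evaluated at the time of application):
      {}_{0}D_{t}^{-q(t)} f(t) = \int_{0}^{t} \frac{(t-\tau)^{q(t)-1}}{\Gamma\big(q(t)\big)}\, f(\tau)\, d\tau

      % A distributed-order operator weights constant-order operators by a density c(q):
      \int_{q_1}^{q_2} c(q)\; {}_{0}D_{t}^{q} f(t)\, dq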

  7. Still searching for the engram.

    PubMed

    Eichenbaum, Howard

    2016-09-01

    For nearly a century, neurobiologists have searched for the engram-the neural representation of a memory. Early studies showed that the engram is widely distributed both within and across brain areas and is supported by interactions among large networks of neurons. Subsequent research has identified engrams that support memory within dedicated functional systems for habit learning and emotional memory, but the engram for declarative memories has been elusive. Nevertheless, recent years have brought progress from molecular biological approaches that identify neurons and networks that are necessary and sufficient to support memory, and from recording approaches and population analyses that characterize the information coded by large neural networks. These new directions offer the promise of revealing the engrams for episodic and semantic memories.

  8. Emotion-attention interactions in recognition memory for distractor faces.

    PubMed

    Srinivasan, Narayanan; Gupta, Rashmi

    2010-04-01

    Effective filtering of distractor information has been shown to be dependent on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we wanted to explore the recognition memory for emotional distractors especially as a function of focused attention and distributed attention by manipulating load and the spatial spread of attention. We performed two experiments to study emotion-attention interactions by measuring recognition memory performance for distractor neutral and emotional faces. Participants performed a color discrimination task (low-load) or letter identification task (high-load) with a letter string display in Experiment 1 and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention is more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information like sad or happy is associated with focused or distributed attention respectively. Distractor processing with emotional information also has implications for theories of attention. Copyright 2010 APA, all rights reserved.

  9. A trade-off between local and distributed information processing associated with remote episodic versus semantic memory.

    PubMed

    Heisz, Jennifer J; Vakorin, Vasily; Ross, Bernhard; Levine, Brian; McIntosh, Anthony R

    2014-01-01

    Episodic memory and semantic memory produce very different subjective experiences yet rely on overlapping networks of brain regions for processing. Traditional approaches for characterizing functional brain networks emphasize static states of function and thus are blind to the dynamic information processing within and across brain regions. This study used information theoretic measures of entropy to quantify changes in the complexity of the brain's response as measured by magnetoencephalography while participants listened to audio recordings describing past personal episodic and general semantic events. Personal episodic recordings evoked richer subjective mnemonic experiences and more complex brain responses than general semantic recordings. Critically, we observed a trade-off between the relative contribution of local versus distributed entropy, such that personal episodic recordings produced relatively more local entropy whereas general semantic recordings produced relatively more distributed entropy. Changes in the relative contributions of local and distributed entropy to the total complexity of the system provides a potential mechanism that allows the same network of brain regions to represent cognitive information as either specific episodes or more general semantic knowledge.

  10. Work stealing for GPU-accelerated parallel programs in a global address space framework: WORK STEALING ON GPU-ACCELERATED SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

    Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.

  11. Work stealing for GPU-accelerated parallel programs in a global address space framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

    Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
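
    For orientation, the sketch below is a minimal thread-based rendering of the work-stealing idea (it is not the paper's distributed CPU-GPU runtime): each worker pops tasks from its own deque and, when idle, steals from the opposite end of a random victim's deque. The task payload, termination heuristic and worker count are arbitrary; a real implementation would use a lock-free deque and account for data-movement costs.

      import random
      import threading
      from collections import deque

      NWORKERS = 4
      deques = [deque() for _ in range(NWORKERS)]          # one task deque per worker
      results = []
      results_lock = threading.Lock()

      def worker(wid):
          rng = random.Random(wid)
          idle_spins = 0
          while idle_spins < 1000:                          # crude termination for the demo
              try:
                  task = deques[wid].pop()                  # LIFO pop from own deque
              except IndexError:
                  victim = rng.randrange(NWORKERS)          # pick a random victim to steal from
                  try:
                      task = deques[victim].popleft()       # FIFO steal from the other end
                  except IndexError:
                      idle_spins += 1
                      continue
              idle_spins = 0
              with results_lock:
                  results.append((wid, task, task * task))  # "execute" the task

      deques[0].extend(range(100))                          # seed all work on worker 0 to force stealing
      threads = [threading.Thread(target=worker, args=(w,)) for w in range(NWORKERS)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(f"{len(results)} tasks completed by workers {sorted({w for w, _, _ in results})}")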

  12. When the third is dead: memory, mourning, and witnessing in the aftermath of the holocaust.

    PubMed

    Gerson, Samuel

    2009-12-01

    The origins of psychoanalysis, as well as the concerns of our daily endeavors, center on engagement with the fate of the unbearable - be it wish, affect, or experience. In this paper, I explore psychological states and dynamics faced by survivors of genocide and their children in their struggle to sustain life in the midst of unremitting deadliness. Toward this continuous effort, I re-examine Freud's theoretical formulations concerning memory and mourning, elaborate André Green's concept of the 'Dead Mother', and introduce more recent work on the concepts of the 'third' and 'thirdness'. Throughout, my thoughts are informed by our clinical experience with the essential role of witnessing in sustaining life after massive trauma. I bring aspects of all these forms of knowing to reflections about a poem by Primo Levi entitled Unfinished business and to our own never finished business of avoiding denial while living in an age of genocide and under the aura of uncontained destructiveness.

  13. Emotional organization of autobiographical memory.

    PubMed

    Schulkind, Matthew D; Woldorf, Gillian M

    2005-09-01

    The emotional organization of autobiographical memory was examined by determining whether emotional cues would influence autobiographical retrieval in younger and older adults. Unfamiliar musical cues that represented orthogonal combinations of positive and negative valence and high and low arousal were used. Whereas cue valence influenced the valence of the retrieved memories, cue arousal did not affect arousal ratings. However, high-arousal cues were associated with reduced response latencies. A significant bias to report positive memories was observed, especially for the older adults, but neither the distribution of memories across the life span nor response latencies varied across memories differing in valence or arousal. These data indicate that emotional information can serve as effective cues for autobiographical memories and that autobiographical memories are organized in terms of emotional valence but not emotional arousal. Thus, current theories of autobiographical memory must be expanded to include emotional valence as a primary dimension of organization.

  14. A Cerebellar-model Associative Memory as a Generalized Random-access Memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1989-01-01

    A versatile neural-net model is explained in terms familiar to computer scientists and engineers. It is called the sparse distributed memory, and it is a random-access memory for very long words (for patterns with thousands of bits). Its potential utility is the result of several factors: (1) a large pattern representing an object or a scene or a moment can encode a large amount of information about what it represents; (2) this information can serve as an address to the memory, and it can also serve as data; (3) the memory is noise tolerant--the information need not be exact; (4) the memory can be made arbitrarily large and hence an arbitrary amount of information can be stored in it; and (5) the architecture is inherently parallel, allowing large memories to be fast. Such memories can become important components of future computers.
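
    The behavior described above can be demonstrated with a very small numerical sketch. The code below is an illustrative, down-scaled sparse distributed memory (the word length, number of hard locations and activation radius are arbitrary choices, not Kanerva's recommended parameters): it writes one pattern autoassociatively and reads it back from a cue with 20 corrupted bits.

      import numpy as np

      rng = np.random.default_rng(1)
      N_BITS, N_HARD, RADIUS = 256, 2000, 112   # word length, hard locations, activation radius

      hard_addresses = rng.integers(0, 2, size=(N_HARD, N_BITS), dtype=np.int8)
      counters = np.zeros((N_HARD, N_BITS), dtype=np.int32)

      def _active(address):
          """Hard locations within Hamming distance RADIUS of the address."""
          dist = np.count_nonzero(hard_addresses != address, axis=1)
          return dist <= RADIUS

      def write(address, data):
          sel = _active(address)
          counters[sel] += np.where(data == 1, 1, -1)       # increment for 1s, decrement for 0s

      def read(address):
          sel = _active(address)
          return (counters[sel].sum(axis=0) > 0).astype(np.int8)

      pattern = rng.integers(0, 2, size=N_BITS, dtype=np.int8)
      write(pattern, pattern)                                # autoassociative store
      noisy = pattern.copy()
      noisy[rng.choice(N_BITS, size=20, replace=False)] ^= 1 # corrupt 20 bits of the cue
      recovered = read(noisy)
      print("bits recovered correctly:", int((recovered == pattern).sum()), "of", N_BITS)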

  15. GRIM-Filter: Fast seed location filtering in DNA read mapping using processing-in-memory technologies.

    PubMed

    Kim, Jeremie S; Senol Cali, Damla; Xin, Hongyi; Lee, Donghyuk; Ghose, Saugata; Alser, Mohammed; Hassan, Hasan; Ergin, Oguz; Alkan, Can; Mutlu, Onur

    2018-05-09

    Seed location filtering is critical in DNA read mapping, a process where billions of DNA fragments (reads) sampled from a donor are mapped onto a reference genome to identify genomic variants of the donor. State-of-the-art read mappers 1) quickly generate possible mapping locations for seeds (i.e., smaller segments) within each read, 2) extract reference sequences at each of the mapping locations, and 3) check similarity between each read and its associated reference sequences with a computationally-expensive algorithm (i.e., sequence alignment) to determine the origin of the read. A seed location filter comes into play before alignment, discarding seed locations that alignment would deem a poor match. The ideal seed location filter would discard all poor match locations prior to alignment such that there is no wasted computation on unnecessary alignments. We propose a novel seed location filtering algorithm, GRIM-Filter, optimized to exploit 3D-stacked memory systems that integrate computation within a logic layer stacked under memory layers, to perform processing-in-memory (PIM). GRIM-Filter quickly filters seed locations by 1) introducing a new representation of coarse-grained segments of the reference genome, and 2) using massively-parallel in-memory operations to identify read presence within each coarse-grained segment. Our evaluations show that for a sequence alignment error tolerance of 0.05, GRIM-Filter 1) reduces the false negative rate of filtering by 5.59x-6.41x, and 2) provides an end-to-end read mapper speedup of 1.81x-3.65x, compared to a state-of-the-art read mapper employing the best previous seed location filtering algorithm. GRIM-Filter exploits 3D-stacked memory, which enables the efficient use of processing-in-memory, to overcome the memory bandwidth bottleneck in seed location filtering. We show that GRIM-Filter significantly improves the performance of a state-of-the-art read mapper. GRIM-Filter is a universal seed location filter that can be applied to any read mapper. We hope that our results provide inspiration for new works to design other bioinformatics algorithms that take advantage of emerging technologies and new processing paradigms, such as processing-in-memory using 3D-stacked memory devices.
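
    To make the bin-based filtering idea concrete, here is a small hypothetical sketch (not the GRIM-Filter implementation, and with no modeling of 3D-stacked memory): the reference is split into coarse-grained bins, each bin records which k-mers occur in it, and a candidate seed location is kept only if enough of the read's k-mers are present in the bin containing that location.

      # Toy seed-location filter over coarse-grained reference bins (illustration only).
      K, BIN = 5, 64                      # k-mer length and bin size (arbitrary small values)

      def build_bins(reference):
          """For each bin, record the set of k-mers occurring in it (with overlap at the edge)."""
          bins = []
          for start in range(0, len(reference), BIN):
              segment = reference[start:start + BIN + K - 1]
              bins.append({segment[i:i + K] for i in range(len(segment) - K + 1)})
          return bins

      def passes_filter(read, location, bins, min_hits=3):
          """Keep the location only if enough of the read's k-mers appear in its bin."""
          kmers = {read[i:i + K] for i in range(len(read) - K + 1)}
          return len(kmers & bins[location // BIN]) >= min_hits

      reference = "ACGT" * 200                         # toy reference sequence
      bins = build_bins(reference)
      print(passes_filter("ACGTACGTACGT", 128, bins))  # True: read matches the repeat
      print(passes_filter("TTTTTTTTTTTT", 128, bins))  # False: no k-mer hits in that bin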

  16. The CAnadian NIRISS Unbiased Cluster Survey (CANUCS)

    NASA Astrophysics Data System (ADS)

    Ravindranath, Swara; NIRISS GTO Team

    2017-06-01

    The CANUCS GTO program is a JWST spectroscopy and imaging survey of five massive galaxy clusters and ten parallel fields using the NIRISS low-resolution grisms, NIRCam imaging and NIRSpec multi-object spectroscopy. The primary goal is to understand the evolution of low-mass galaxies across cosmic time. The resolved emission line maps and line ratios for many galaxies, some at a resolution of 100 pc thanks to magnification by gravitational lensing, will enable us to determine the spatial distribution of star formation, dust and metals. Other science goals include the detection and characterization of galaxies within the reionization epoch, using multiply-imaged lensed galaxies to constrain cluster mass distributions and dark matter substructure, and understanding star-formation suppression in the most massive galaxy clusters. In this talk I will describe the science goals of the CANUCS program. The proposed prime and parallel observations will be presented with details of the implementation of the observation strategy using JWST proposal planning tools.

  17. Globular cluster seeding by primordial black hole population

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolgov, A.; Postnov, K., E-mail: dolgov@fe.infn.it, E-mail: kpostnov@gmail.com

    Primordial black holes (PBHs) that form in the early Universe through the modified Affleck-Dine (AD) mechanism of baryogenesis should have an intrinsic log-normal mass distribution. We show that the parameters of this distribution, adjusted to provide the required spatial density of massive seeds (≥ 10⁴ M⊙) for early galaxy formation without violating the dark matter density constraints, predict the existence of a population of intermediate-mass PBHs with a number density of ∼100 Mpc⁻³. We argue that the population of intermediate-mass AD PBHs can also seed the formation of globular clusters in galaxies. In this scenario, each globular cluster should host an intermediate-mass black hole with a mass of a few thousand solar masses, and need not be immersed in a massive dark matter halo.
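
    For reference, a log-normal distribution in mass has the standard probability density written below; the central mass M_0 and logarithmic width σ are generic parameters, and the authors' exact parametrization of the AD mass spectrum may differ.

      % Standard log-normal probability density in mass:
      f(M) = \frac{1}{M\,\sigma\sqrt{2\pi}}\, \exp\!\left[-\frac{\ln^{2}(M/M_{0})}{2\sigma^{2}}\right]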

  18. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and hand-optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
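
    The abstract does not reproduce the model itself. As a generic orientation only (not the authors' formulation), directive-parallelized performance on DSM systems is often reasoned about with an Amdahl-style expression in which an architecture-specific overhead term competes with the ideal parallel speedup:

      % Generic Amdahl-style model with an explicit overhead term (illustrative only).
      % T_1: sequential time, f: parallelizable fraction, p: processors,
      % O(p): parallelization and data-locality overhead on p processors.
      T(p) = T_{1}\left[(1-f) + \frac{f}{p}\right] + O(p),
      \qquad
      S(p) = \frac{T_{1}}{T(p)}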

  19. The mysteries of remote memory.

    PubMed

    Albo, Zimbul; Gräff, Johannes

    2018-03-19

    Long-lasting memories form the basis of our identity as individuals and lie central in shaping future behaviours that guide survival. Surprisingly, however, our current knowledge of how such memories are stored in the brain and retrieved, as well as the dynamics of the circuits involved, remains scarce despite seminal technical and experimental breakthroughs in recent years. Traditionally, it has been proposed that, over time, information initially learnt in the hippocampus is stored in distributed cortical networks. This process-the standard theory of memory consolidation-would stabilize the newly encoded information into a lasting memory, become independent of the hippocampus, and remain essentially unmodifiable throughout the lifetime of the individual. In recent years, several pieces of evidence have started to challenge this view and indicate that long-lasting memories might already ab ovo be encoded, and subsequently stored in distributed cortical networks, akin to the multiple trace theory of memory consolidation. In this review, we summarize these recent findings and attempt to identify the biologically plausible mechanisms based on which a contextual memory becomes remote by integrating different levels of analysis: from neural circuits to cell ensembles across synaptic remodelling and epigenetic modifications. From these studies, remote memory formation and maintenance appear to occur through a multi-trace, dynamic and integrative cellular process ranging from the synapse to the nucleus, and represent an exciting field of research primed to change quickly as new experimental evidence emerges.This article is part of a discussion meeting issue 'Of mice and mental health: facilitating dialogue between basic and clinical neuroscientists'. © 2018 The Authors.

  20. Memory systems in schizophrenia: Modularity is preserved but deficits are generalized.

    PubMed

    Haut, Kristen M; Karlsgodt, Katherine H; Bilder, Robert M; Congdon, Eliza; Freimer, Nelson B; London, Edythe D; Sabb, Fred W; Ventura, Joseph; Cannon, Tyrone D

    2015-10-01

    Schizophrenia patients exhibit impaired working and episodic memory, but this may represent generalized impairment across memory modalities or performance deficits restricted to particular memory systems in subgroups of patients. Furthermore, it is unclear whether deficits are unique from those associated with other disorders. Healthy controls (n=1101) and patients with schizophrenia (n=58), bipolar disorder (n=49) and attention-deficit-hyperactivity-disorder (n=46) performed 18 tasks addressing primarily verbal and spatial episodic and working memory. Effect sizes for group contrasts were compared across tasks and the consistency of subjects' distributional positions across memory domains was measured. Schizophrenia patients performed poorly relative to the other groups on every test. While low to moderate correlation was found between memory domains (r=.320), supporting modularity of these systems, there was limited agreement between measures regarding each individual's task performance (ICC=.292) and in identifying those individuals falling into the lowest quintile (kappa=0.259). A general ability factor accounted for nearly all of the group differences in performance and agreement across measures in classifying low performers. Pathophysiological processes involved in schizophrenia appear to act primarily on general abilities required in all tasks rather than on specific abilities within different memory domains and modalities. These effects represent a general shift in the overall distribution of general ability (i.e., each case functioning at a lower level than they would have if not for the illness), rather than presence of a generally low-performing subgroup of patients. There is little evidence that memory impairments in schizophrenia are shared with bipolar disorder and ADHD. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Memory systems in schizophrenia: Modularity is preserved but deficits are generalized

    PubMed Central

    Haut, Kristen M.; Karlsgodt, Katherine H.; Bilder, Robert M.; Congdon, Eliza; Freimer, Nelson; London, Edythe D.; Sabb, Fred W.; Ventura, Joseph; Cannon, Tyrone D.

    2015-01-01

    Objective Schizophrenia patients exhibit impaired working and episodic memory, but this may represent generalized impairment across memory modalities or performance deficits restricted to particular memory systems in subgroups of patients. Furthermore, it is unclear whether deficits are unique from those associated with other disorders. Method Healthy controls (n=1101) and patients with schizophrenia (n=58), bipolar disorder (n=49) and attention-deficit-hyperactivity-disorder (n=46) performed 18 tasks addressing primarily verbal and spatial episodic and working memory. Effect sizes for group contrasts were compared across tasks and the consistency of subjects’ distributional positions across memory domains was measured. Results Schizophrenia patients performed poorly relative to the other groups on every test. While low to moderate correlation was found between memory domains (r=.320), supporting modularity of these systems, there was limited agreement between measures regarding each individual’s task performance (ICC=.292) and in identifying those individuals falling into the lowest quintile (kappa=0.259). A general ability factor accounted for nearly all of the group differences in performance and agreement across measures in classifying low performers. Conclusions Pathophysiological processes involved in schizophrenia appear to act primarily on general abilities required in all tasks rather than on specific abilities within different memory domains and modalities. These effects represent a general shift in the overall distribution of general ability (i.e., each case functioning at a lower level than they would have if not for the illness), rather than presence of a generally low-performing subgroup of patients. There is little evidence that memory impairments in schizophrenia are shared with bipolar disorder and ADHD. PMID:26299707

  2. The mysteries of remote memory

    PubMed Central

    2018-01-01

    Long-lasting memories form the basis of our identity as individuals and lie central in shaping future behaviours that guide survival. Surprisingly, however, our current knowledge of how such memories are stored in the brain and retrieved, as well as the dynamics of the circuits involved, remains scarce despite seminal technical and experimental breakthroughs in recent years. Traditionally, it has been proposed that, over time, information initially learnt in the hippocampus is stored in distributed cortical networks. This process—the standard theory of memory consolidation—would stabilize the newly encoded information into a lasting memory, become independent of the hippocampus, and remain essentially unmodifiable throughout the lifetime of the individual. In recent years, several pieces of evidence have started to challenge this view and indicate that long-lasting memories might already ab ovo be encoded, and subsequently stored in distributed cortical networks, akin to the multiple trace theory of memory consolidation. In this review, we summarize these recent findings and attempt to identify the biologically plausible mechanisms based on which a contextual memory becomes remote by integrating different levels of analysis: from neural circuits to cell ensembles across synaptic remodelling and epigenetic modifications. From these studies, remote memory formation and maintenance appear to occur through a multi-trace, dynamic and integrative cellular process ranging from the synapse to the nucleus, and represent an exciting field of research primed to change quickly as new experimental evidence emerges. This article is part of a discussion meeting issue ‘Of mice and mental health: facilitating dialogue between basic and clinical neuroscientists’. PMID:29352028

  3. Luminous and Variable Stars in M31 and M33. V. The Upper HR Diagram

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, Roberta M.; Davidson, Kris; Hahn, David

    We present HR diagrams for the massive star populations in M31 and M33, including several different types of emission-line stars: the confirmed luminous blue variables (LBVs), candidate LBVs, B[e] supergiants, and the warm hypergiants. We estimate their apparent temperatures and luminosities for comparison with their respective massive star populations and evaluate the possible relationships of these different classes of evolved, massive stars, and their evolutionary state. Several of the LBV candidates lie near the LBV/S Dor instability strip that supports their classification. Most of the B[e] supergiants, however, are less luminous than the LBVs. Many are very dusty with the infrared flux contributing one-third or more to their total flux. They are also relatively isolated from other luminous OB stars. Overall, their spatial distribution suggests a more evolved state. Some may be post-RSGs (red supergiants) like the warm hypergiants, and there may be more than one path to becoming a B[e] star. There are sufficient differences in the spectra, luminosities, spatial distribution, and the presence or lack of dust between the LBVs and B[e] supergiants to conclude that one group does not evolve into the other.

  4. Massively Parallel Assimilation of TOGA/TAO and Topex/Poseidon Measurements into a Quasi Isopycnal Ocean General Circulation Model Using an Ensemble Kalman Filter

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Borovikov, Anna Y.; Suarez, Max

    1999-01-01

    A massively parallel ensemble Kalman filter (EnKF) is used to assimilate temperature data from the TOGA/TAO array and altimetry from TOPEX/POSEIDON into a Pacific basin version of the NASA Seasonal to Interannual Prediction Project (NSIPP)'s quasi-isopycnal ocean general circulation model. The EnKF is an approximate Kalman filter in which the error-covariance propagation step is modeled by the integration of multiple instances of a numerical model. An estimate of the true error covariances is then inferred from the distribution of the ensemble of model state vectors. This implementation of the filter takes advantage of the inherent parallelism in the EnKF algorithm by running all the model instances concurrently. The Kalman filter update step also occurs in parallel by having each processor process the observations that occur in the region of physical space for which it is responsible. The massively parallel data assimilation system is validated by withholding some of the data and then quantifying the extent to which the withheld information can be inferred from the assimilation of the remaining data. The distributions of the forecast and analysis error covariances predicted by the EnKF are also examined.
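
    As a compact illustration of the update step summarized here, the following sketch is a minimal textbook-style stochastic EnKF analysis in numpy (not the NSIPP parallel implementation): it estimates the forecast error covariance from the ensemble spread and assimilates observations through a linear observation operator. All dimensions, the operator and the error levels are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      n_state, n_obs, n_ens = 50, 10, 20                 # arbitrary toy dimensions

      # Forecast ensemble: each column is one model state vector from an independent model run.
      X = rng.normal(size=(n_state, n_ens))

      # Linear observation operator: observe every 5th state variable.
      H = np.zeros((n_obs, n_state))
      H[np.arange(n_obs), np.arange(n_obs) * 5] = 1.0

      R = 0.5 * np.eye(n_obs)                            # observation error covariance
      y = rng.normal(size=n_obs)                         # the observations

      # Sample forecast error covariance from the ensemble spread.
      A = X - X.mean(axis=1, keepdims=True)
      P = A @ A.T / (n_ens - 1)

      # Kalman gain and perturbed-observation update of every ensemble member.
      K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
      Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
      X_analysis = X + K @ (Y - H @ X)

      print("mean forecast-to-analysis shift:", np.abs(X_analysis.mean(1) - X.mean(1)).mean())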

  5. Detecting Massive, High-Redshift Galaxy Clusters Using the Thermal Sunyaev-Zel'dovich Effect

    NASA Astrophysics Data System (ADS)

    Adams, Carson; Steinhardt, Charles L.; Loeb, Abraham; Karim, Alexander; Staguhn, Johannes; Erler, Jens; Capak, Peter L.

    2017-01-01

    We develop the thermal Sunyaev-Zel'dovich (SZ) effect as a direct astrophysical measure of the mass distribution of dark matter halos. The SZ effect increases with cosmological distance, a unique astronomical property, and is highly sensitive to halo mass. We find that this presents a powerful methodology for distinguishing between competing models of the halo mass function distribution, particularly in the high-redshift domain just a few hundred million years after the Big Bang. Recent surveys designed to probe this epoch of initial galaxy formation such as CANDELS and SPLASH report an over-abundance of highly massive halos as inferred from stellar ultraviolet (UV) luminosities and the stellar mass to halo mass ratio estimated from nearby galaxies. If these UV luminosity to halo mass relations hold to high-redshift, observations estimate several orders of magnitude more highly massive halos than predicted by hierarchical merging and the standard cosmological paradigm. Strong constraints on the masses of these galaxy clusters are essential to resolving the current tension between observation and theory. We conclude that detections of thermal SZ sources are plausible at high-redshift only for the halo masses inferred from observation. Therefore, future SZ surveys will provide a robust determination between theoretical and observational predictions.

  6. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  7. Microparticles controllable accumulation, arrangement, and spatial shaping performed by tapered-fiber-based laser-induced convection flow.

    PubMed

    Zhang, Yu; Lei, Jiaojie; Zhang, Yaxun; Liu, Zhihai; Zhang, Jianzhong; Yang, Xinghua; Yang, Jun; Yuan, Libo

    2017-10-30

    The ability to arrange cells and/or microparticles into a desired pattern is critical in biological, chemical, and metamaterial studies and other applications. Researchers have developed a variety of patterning techniques, which either have a limited capacity to simultaneously trap large numbers of particles or lack the spatial resolution necessary to manipulate individual particles. Several approaches have been proposed that combine high spatial selectivity and high throughput, but those methods are complex and difficult to fabricate. In this article, we propose and demonstrate a simple method that combines laser-induced convection flow and fiber-based optical trapping to perform both regular and specialized spatial shaping arrangements. Essentially, we combine a light field with a large optical intensity gradient and a thermal field with a large temperature gradient to shape the arrangement of the microparticles. The tapered-fiber-based laser-induced convection flow provides not only batch manipulation of large numbers of particles but also finer manipulation of one or several selected particles, overcoming the limits of single-fiber-based photothermal manipulation of either bulk or individual particles. The combined technique allows for quick accumulation of microparticles, single-layer and multilayer arrangement, specialized spatial shaping and adjustment, and microparticle sorting.

  8. The radial distribution of supernovae in nuclear starbursts

    NASA Astrophysics Data System (ADS)

    Herrero-Illana, R.; Pérez-Torres, M. A.; Alberdi, A.

    2013-05-01

    Galaxy-galaxy interactions are expected to be responsible for triggering massive star formation and possibly accretion onto a supermassive black hole, by providing large amounts of dense molecular gas down to the central kiloparsec region. Several scenarios to drive the gas further down to the central ∼100 pc have been proposed, including the formation of a nuclear disk around the black hole, where massive stars would produce supernovae. Here, we probe the radial distribution of supernovae and supernova remnants in the nuclear regions of the starburst galaxies M82, Arp 299-A, and Arp 220, by using high-angular resolution (≲0.″1) radio observations. We derived scale-length values for the putative nuclear disks, which range from ∼20-30 pc for Arp 299-A and Arp 220, up to ∼140 pc for M82. The radial distribution of SNe for the nuclear disks in Arp 299-A and Arp 220 is also consistent with a power-law surface density profile of exponent γ = 1, as expected from detailed hydrodynamical simulations of nuclear disks. This study is detailed in Herrero-Illana et al. (2012).
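
    For orientation, the two profile shapes referred to above can be written as follows; the exponential form is the generic definition of a disk scale length h, not necessarily the authors' exact fitting function, and γ is the power-law exponent quoted for the SN surface density.

      % Exponential disk with scale length h, and the power-law surface density profile
      % quoted for the supernova distribution:
      \Sigma(r) = \Sigma_{0}\, e^{-r/h},
      \qquad
      \Sigma_{\rm SN}(r) \propto r^{-\gamma},\ \ \gamma = 1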

  9. Deficits in working memory and motor performance in the APP/PS1ki mouse model for Alzheimer's disease.

    PubMed

    Wirths, Oliver; Breyhan, Henning; Schäfer, Stephanie; Roth, Christian; Bayer, Thomas A

    2008-06-01

    The APP/PS1ki mouse model for Alzheimer's disease (AD) exhibits robust brain and spinal cord axonal degeneration and hippocampal CA1 neuron loss starting at 6 months of age. It expresses human mutant APP751 with the Swedish and London mutations together with two FAD-linked knocked-in mutations (PS1 M233T and PS1 L235P) in the murine PS1 gene. The present report covers a phenotypical analysis of this model using either behavioral tests for working memory and motor performance, as well as an analysis of weight development and body shape. At the age of 6 months, a dramatic, age-dependent change in all of these properties and characteristics was observed, accompanied by a significantly reduced ability to perform working memory and motor tasks. The APP/PS1ki mice were smaller and showed development of a thoracolumbar kyphosis, together with an incremental loss of body weight. While 2-month-old APP/PS1ki mice were inconspicuous in all of these tasks and properties, there is a massive age-related impairment in all tested behavioral paradigms. We have previously reported robust axonal degeneration in brain and spinal cord, as well as abundant hippocampal CA1 neuron loss starting at 6 months of age in the APP/PS1ki mouse model, which coincides with the onset of motor and memory deficits described in the present report.

  10. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
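
    The 'work farm' pattern described above maps naturally onto a pool of identical workers fed by one input stream and draining into one output stream. The sketch below is a generic, hypothetical rendering of that pattern with Python threads and queues; it does not model the platform's RISC processor objects or its self-synchronizing channels.

      import queue
      import threading

      # Generic "work farm": one input stream, a set of identical workers, one output stream.
      NUM_WORKERS = 4
      tasks, results = queue.Queue(), queue.Queue()
      SENTINEL = None

      def worker():
          while True:
              item = tasks.get()            # blocking read from the input channel
              if item is SENTINEL:
                  tasks.put(SENTINEL)       # propagate shutdown to the other workers
                  break
              results.put(item * item)      # stand-in for the real per-item computation

      threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
      for t in threads: t.start()

      for item in range(20):                # producer: feed the input stream
          tasks.put(item)
      tasks.put(SENTINEL)                   # signal end of input

      for t in threads: t.join()
      print(sorted(results.get() for _ in range(20)))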

  11. Spectral Calculation of ICRF Wave Propagation and Heating in 2-D Using Massively Parallel Computers

    NASA Astrophysics Data System (ADS)

    Jaeger, E. F.; D'Azevedo, E.; Berry, L. A.; Carter, M. D.; Batchelor, D. B.

    2000-10-01

    Spectral calculations of ICRF wave propagation in plasmas have the natural advantage that they require no assumption regarding the smallness of the ion Larmor radius ρ relative to wavelength λ. Results are therefore applicable to all orders in k⊥ρ where k⊥ = 2π/λ. But because all modes in the spectral representation are coupled, the solution requires inversion of a large dense matrix. In contrast, finite difference algorithms involve only matrices that are sparse and banded. Thus, spectral calculations of wave propagation and heating in tokamak plasmas have so far been limited to 1-D. In this paper, we extend the spectral method to 2-D by taking advantage of new matrix inversion techniques that utilize massively parallel computers. By spreading the dense matrix over 576 processors on the ORNL IBM RS/6000 SP supercomputer, we are able to solve up to 120,000 coupled complex equations requiring 230 GBytes of memory and achieving over 500 Gflops/sec. Initial results for ASDEX and NSTX will be presented using up to 200 modes in both the radial and vertical dimensions.
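
    The quoted memory footprint is consistent with storing the dense matrix of the coupled complex system in double precision (a quick check, assuming 16 bytes per double-precision complex entry):

      % Dense double-precision complex matrix for N ≈ 1.2 x 10^5 coupled equations:
      N^{2} \times 16\ \text{bytes} = (1.2\times10^{5})^{2} \times 16\ \text{B}
      \approx 2.3\times10^{11}\ \text{B} \approx 230\ \text{GB}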

  12. Probing the mass assembly of massive nearby galaxies with deep imaging

    NASA Astrophysics Data System (ADS)

    Duc, P.-A.; Cuillandre, J.-C.; Alatalo, K.; Blitz, L.; Bois, M.; Bournaud, F.; Bureau, M.; Cappellari, M.; Côté, P.; Davies, R. L.; Davis, T. A.; de Zeeuw, P. T.; Emsellem, E.; Ferrarese, L.; Ferriere, E.; Gwyn, S.; Khochfar, S.; Krajnovic, D.; Kuntschner, H.; Lablanche, P.-Y.; McDermid, R. M.; Michel-Dansac, L.; Morganti, R.; Naab, T.; Oosterloo, T.; Sarzi, M.; Scott, N.; Serra, P.; Weijmans, A.; Young, L. M.

    2013-07-01

    According to a popular scenario supported by numerical models, the mass assembly and growth of massive galaxies, in particular the Early-Type Galaxies (ETGs), is, below a redshift of 1, mainly due to the accretion of multiple gas-poor satellites. In order to get observational evidence of the role played by minor dry mergers, we are obtaining extremely deep optical images of a complete volume-limited sample of nearby ETGs. These observations, done with the CFHT as part of the ATLAS3D, NGVS and MATLAS projects, reach a stunning 28.5-29 mag arcsec⁻² surface brightness limit in the g' band. They allow us to detect the relics of past collisions such as faint stellar tidal tails as well as the very extended stellar halos which keep the memory of the last episodes of galactic accretion. Images and preliminary results from this on-going survey are presented, in particular a possible correlation between the fine structure index (which parametrizes the amount of tidal perturbation) of the ETGs, their stellar mass, effective radius and gas content.

  13. Password Cracking Using Sony Playstations

    NASA Astrophysics Data System (ADS)

    Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet

    Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.

  14. Molecular Cloud Structures and Massive Star Formation in N159

    NASA Astrophysics Data System (ADS)

    Nayak, O.; Meixner, M.; Fukui, Y.; Tachihara, K.; Onishi, T.; Saigo, K.; Tokuda, K.; Harada, R.

    2018-02-01

    The N159 star-forming region is one of the most massive giant molecular clouds (GMCs) in the Large Magellanic Cloud (LMC). We show the ¹²CO, ¹³CO, and CS molecular gas lines observed with ALMA in N159 west (N159W) and N159 east (N159E). We relate the structure of the gas clumps to the properties of 24 massive young stellar objects (YSOs) that include 10 newly identified YSOs based on our search. We use dendrogram analysis to identify properties of the molecular clumps, such as flux, mass, linewidth, size, and virial parameter. We relate the YSO properties to the molecular gas properties. We find that the CS gas clumps have a steeper size–linewidth relation than the ¹²CO or ¹³CO gas clumps. This larger slope could potentially occur if the CS gas is tracing shocks. The virial parameters of the ¹³CO gas clumps in N159W and N159E are low (<1). The threshold for massive star formation in N159W is 501 M⊙ pc⁻², and the threshold for massive star formation in N159E is 794 M⊙ pc⁻². We find that ¹³CO is more photodissociated in N159E than N159W. The most massive YSO in N159E has cleared out a molecular gas hole in its vicinity. All the massive YSO candidates in N159E have a more evolved spectral energy distribution type in comparison to the YSO candidates in N159W. These differences lead us to conclude that the giant molecular cloud complex in N159E is more evolved than the giant molecular cloud complex in N159W.
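
    The virial parameter quoted for the clumps is conventionally defined from the measured linewidth, size and mass; the standard form is written below (the paper's exact constants may differ), where σ_v is the one-dimensional velocity dispersion, R the clump radius and M its mass, and values below unity indicate that self-gravity dominates over turbulent support.

      \alpha_{\rm vir} = \frac{5\,\sigma_{v}^{2}\,R}{G\,M}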

  15. NGC 346: Looking in the Cradle of a Massive Star Cluster

    NASA Astrophysics Data System (ADS)

    Gouliermis, Dimitrios A.; Hony, Sacha

    2017-03-01

    How does a star cluster of more than a few tens of thousands of solar masses form? We present the case of the cluster NGC 346 in the Small Magellanic Cloud, still embedded in its natal star-forming region N66, and we propose a scenario for its formation, based on observations of the rich stellar populations in the region. Young massive clusters host a high fraction of early-type stars, indicating an extremely high star formation efficiency. The Milky Way galaxy hosts several young massive clusters that fill the gap between young low-mass open clusters and old massive globular clusters. Only a handful, though, are young enough to study their formation. Moreover, the investigation of their gaseous natal environments suffers from contamination by the Galactic disk. Young massive clusters are very abundant in distant starburst and interacting galaxies, but the distances of their host galaxies do not allow a detailed analysis of their formation either. The Magellanic Clouds, on the other hand, host young massive clusters over a wide range of ages, with the youngest still embedded in their giant HII regions. Hubble Space Telescope imaging of such star-forming complexes provides a stellar sampling with a high dynamic range in stellar masses, allowing the detailed study of star formation at scales typical for molecular clouds. Our cluster analysis of the distribution of newly born stars in N66 shows that star formation in the region proceeds in a clumpy hierarchical fashion, leading to the formation of both a dominant young massive cluster, hosting about half of the observed pre-main-sequence population, and a self-similar dispersed distribution of the remaining stars. We investigate the correlation between stellar surface density (and star formation rate derived from star counts) and molecular gas surface density (derived from dust column density) in order to unravel the physical conditions that gave birth to NGC 346. A power-law fit to the data yields a steep correlation between these two parameters with considerable scatter. The fraction of stellar over the total (gas plus young stars) mass is found to be systematically higher within the central 15 pc (where the young massive cluster is located) than outside, which suggests variations in the star formation efficiency within the same star-forming complex. This trend possibly reflects a change in star formation efficiency in N66 between clustered and non-clustered star formation. Our findings suggest that the formation of NGC 346 is the combined result of star formation regulated by turbulence and of early dynamical evolution induced by the gravitational potential of the dense interstellar medium.

  16. Limits in decision making arise from limits in memory retrieval.

    PubMed

    Giguère, Gyslain; Love, Bradley C

    2013-05-07

    Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people's memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people's test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.

  17. Limits in decision making arise from limits in memory retrieval

    PubMed Central

    Giguère, Gyslain; Love, Bradley C.

    2013-01-01

    Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers. PMID:23610402

  18. Memory for Serial Order.

    ERIC Educational Resources Information Center

    Lewandowsky, Stephan; Murdock, Bennet B., Jr.

    1989-01-01

    An extension to Murdock's Theory of Distributed Associative Memory, based on associative chaining between items, is presented. The extended theory is applied to several serial order phenomena, including serial list learning, delayed recall effects, partial report effects, and buildup and release from proactive interference. (TJH)

  19. Exploring the use of memory colors for image enhancement

    NASA Astrophysics Data System (ADS)

    Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly

    2014-02-01

    Memory colors refer to those colors recalled in association with familiar objects. While some previous work introduces this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, has not been appropriately investigated. In addition, the resulting adjustment methods developed are not evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application using representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.

  20. Confronting Models of Massive Star Evolution and Explosions with Remnant Mass Measurements

    NASA Astrophysics Data System (ADS)

    Raithel, Carolyn A.; Sukhbold, Tuguldur; Özel, Feryal

    2018-03-01

    The mass distribution of compact objects provides a fossil record that can be studied to uncover information on the late stages of massive star evolution, the supernova explosion mechanism, and the dense matter equation of state. Observations of neutron star masses indicate a bimodal Gaussian distribution, while the observed black hole mass distribution decays exponentially for stellar-mass black holes. We use these observed distributions to directly confront the predictions of stellar evolution models and the neutrino-driven supernova simulations of Sukhbold et al. We find strong agreement between the black hole and low-mass neutron star distributions created by these simulations and the observations. We show that a large fraction of the stellar envelope must be ejected, either during the formation of stellar-mass black holes or prior to the implosion through tidal stripping due to a binary companion, in order to reproduce the observed black hole mass distribution. We also determine the origins of the bimodal peaks of the neutron star mass distribution, finding that the low-mass peak (centered at ∼1.4 M ⊙) originates from progenitors with M ZAMS ≈ 9–18 M ⊙. The simulations fail to reproduce the observed peak of high-mass neutron stars (centered at ∼1.8 M ⊙) and we explore several possible explanations. We argue that the close agreement between the observed and predicted black hole and low-mass neutron star mass distributions provides new, promising evidence that these stellar evolution and explosion models capture the majority of relevant stellar, nuclear, and explosion physics involved in the formation of compact objects.
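
    The bimodal neutron star mass distribution referred to above is commonly modeled as a two-component Gaussian mixture; the generic form is written below, with the component means placed near the quoted peaks. The weight w and the widths σ₁, σ₂ are free parameters, not values taken from this paper.

      % Generic two-component Gaussian mixture for the neutron star mass distribution:
      p(m) = \frac{w}{\sqrt{2\pi}\,\sigma_{1}}\, e^{-(m-\mu_{1})^{2}/2\sigma_{1}^{2}}
           + \frac{1-w}{\sqrt{2\pi}\,\sigma_{2}}\, e^{-(m-\mu_{2})^{2}/2\sigma_{2}^{2}},
      \qquad \mu_{1} \approx 1.4\,M_{\odot},\ \ \mu_{2} \approx 1.8\,M_{\odot}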
