Sample records for PVM (Parallel Virtual Machine)

  1. Using PVM to host CLIPS in distributed environments

    NASA Technical Reports Server (NTRS)

    Myers, Leonard; Pohl, Kym

    1994-01-01

    It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributed computing among multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message-passing functions in CLIPS and enable the full range of PVM facilities.
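
    To make the flavor of this concrete: a user-defined CLIPS function could hand an asserted fact to a peer expert system through PVM roughly as follows. This is a minimal sketch in C against the PVM 3 API; the function names send_fact/receive_fact and the message tag are hypothetical, not the paper's actual design.

        #include <pvm3.h>

        #define FACT_TAG 42                  /* hypothetical message tag */

        /* Forward an asserted fact, as a string, to a peer CLIPS task. */
        int send_fact(int peer_tid, char *fact)
        {
            pvm_initsend(PvmDataDefault);    /* fresh XDR-encoded send buffer */
            pvm_pkstr(fact);                 /* pack the NUL-terminated string */
            return pvm_send(peer_tid, FACT_TAG);
        }

        /* Block until a fact arrives from any peer; the caller asserts it. */
        int receive_fact(char buf[])
        {
            pvm_recv(-1, FACT_TAG);          /* -1 matches any sender */
            return pvm_upkstr(buf);
        }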

  2. PVM Wrapper

    NASA Technical Reports Server (NTRS)

    Katz, Daniel

    2004-01-01

    PVM Wrapper is a software library that makes it possible for code that utilizes the Parallel Virtual Machine (PVM) software library to run using the message-passing interface (MPI) software library, without needing to rewrite the entire code. PVM and MPI are the two most common software libraries used for applications that involve passing of messages among parallel computers. Since about 1996, MPI has been the de facto standard. Codes written when PVM was popular often feature patterns of {"initsend," "pack," "send"} and {"receive," "unpack"} calls. In many cases, these calls are not contiguous, and one set of calls may even span multiple subroutines. These characteristics make it difficult to obtain equivalent functionality via a single MPI "send" call. Because PVM Wrapper is written to run with MPI-1.2, some PVM functions are not permitted and must be replaced, a task that requires some programming expertise. The "pvm_spawn" and "pvm_parent" function calls are not replaced, but a programmer can use "mpirun" and knowledge of the ranks of parent and child tasks, together with supplied macroinstructions, to enable execution of codes that use "pvm_spawn" and "pvm_parent."
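
    The mapping the wrapper has to perform can be sketched as follows; this is an illustrative reconstruction, not the actual PVM Wrapper source. A contiguous PVM {initsend, pack, send} triple has a natural MPI-1.2 counterpart built from MPI_Pack and a single send of type MPI_PACKED; the difficulty the abstract describes arises when the triple is scattered across subroutines.

        #include <mpi.h>
        #include <pvm3.h>

        /* Classic PVM idiom: initsend / pack / send. */
        void send_pvm_style(int dest_tid, int *n, double *x, int tag)
        {
            pvm_initsend(PvmDataDefault);
            pvm_pkint(n, 1, 1);
            pvm_pkdouble(x, *n, 1);
            pvm_send(dest_tid, tag);
        }

        /* The same wire content under MPI-1.2 (assumes MPI_Init was called):
           pack into one buffer, then one send of type MPI_PACKED. */
        void send_mpi_equivalent(int dest_rank, int *n, double *x, int tag)
        {
            char buf[4096];                   /* assumed large enough here */
            int pos = 0;                      /* MPI_Pack advances this cursor */
            MPI_Pack(n, 1, MPI_INT, buf, (int)sizeof buf, &pos, MPI_COMM_WORLD);
            MPI_Pack(x, *n, MPI_DOUBLE, buf, (int)sizeof buf, &pos, MPI_COMM_WORLD);
            MPI_Send(buf, pos, MPI_PACKED, dest_rank, tag, MPI_COMM_WORLD);
        }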

  3. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a network of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.

  4. Intel NX to PVM 3.2 message passing conversion library

    NASA Technical Reports Server (NTRS)

    Arthur, Trey; Nelson, Michael L.

    1993-01-01

    NASA Langley Research Center has developed a library that allows Intel NX message passing codes to be executed under the more popular and widely supported Parallel Virtual Machine (PVM) message passing library. PVM was developed at Oak Ridge National Labs and has become the de facto standard for message passing. This library will allow the many programs that were developed on the Intel iPSC/860 or Intel Paragon in a Single Program Multiple Data (SPMD) design to be ported to the numerous architectures that PVM (version 3.2) supports. Also, the library adds global operations capability to PVM. A familiarity with Intel NX and PVM message passing is assumed.
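
    A conversion layer of this kind essentially re-expresses the NX point-to-point calls in PVM terms. The sketch below assumes the classic NX signatures for csend/crecv and a startup-built table mapping node numbers to PVM task ids; the real Langley library is more complete (global operations, message probing, and so on).

        #include <pvm3.h>

        extern int nx_tids[];   /* node number -> PVM task id, built at startup */

        /* Intel NX csend(): typed byte send addressed by node number. */
        void csend(long type, void *buf, long count, long node, long pid)
        {
            (void)pid;                              /* process type unused here */
            pvm_initsend(PvmDataDefault);
            pvm_pkbyte((char *)buf, (int)count, 1);
            pvm_send(nx_tids[node], (int)type);
        }

        /* Intel NX crecv(): blocking receive selected by message type. */
        void crecv(long typesel, void *buf, long count)
        {
            pvm_recv(-1, (int)typesel);             /* any sender, matching type */
            pvm_upkbyte((char *)buf, (int)count, 1);
        }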

  5. Dust Dynamics in Protoplanetary Disks: Parallel Computing with PVM

    NASA Astrophysics Data System (ADS)

    de La Fuente Marcos, Carlos; Barge, Pierre; de La Fuente Marcos, Raúl

    2002-03-01

    We describe a parallel version of our high-order-accuracy particle-mesh code for the simulation of collisionless protoplanetary disks. We use this code to carry out a massively parallel, two-dimensional, time-dependent, numerical simulation, which includes dust particles, to study the potential role of large-scale, gaseous vortices in protoplanetary disks. This noncollisional problem is easy to parallelize on message-passing multicomputer architectures. We performed the simulations on a cache-coherent nonuniform memory access Origin 2000 machine, using both the parallel virtual machine (PVM) and message-passing interface (MPI) message-passing libraries. Our performance analysis suggests that, for our problem, PVM is about 25% faster than MPI. Using PVM and MPI made it possible to reduce CPU time and increase code performance. This allows for simulations with a large number of particles (N ~ 10^5-10^6) in reasonable CPU times. The performances of our implementation of the parallel code on an Origin 2000 supercomputer are presented and discussed. They exhibit very good speedup behavior and low load imbalance. Our results confirm that giant gaseous vortices can play a dominant role in giant planet formation.

  6. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Tucker, Deanne (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM codes for Computational Fluid Dynamics on a network of Sparcstations, including (a) NAS Parallel Benchmarks CG and MG (White, Alund and Sunderam 1993); (b) a multi-partitioning algorithm for NAS Parallel Benchmark SP (Wijngaart 1993); and (c) an overset grid flowsolver (Smith 1993). These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains (a) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (b) Monitor, a library of run-time trace-collection routines; (c) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (d) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (a) the impact of long message latencies; (b) the impact of multiprogramming overheads and associated load imbalance; (c) cache and virtual-memory effects; and (d) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (a) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (b) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.

  7. Porting Gravitational Wave Signal Extraction to Parallel Virtual Machine (PVM)

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Thompson, David E.; Redmon, Jeffery

    2009-01-01

    Laser Interferometer Space Antenna (LISA) is a planned NASA-ESA mission to be launched around 2012. Gravitational wave detection is fundamentally the determination of frequency, source parameters, and waveform amplitude derived in a specific order from the interferometric time series of the rotating LISA spacecraft. The LISA Science Team has developed a Mock LISA Data Challenge intended to promote the testing of complicated nested search algorithms to detect signals in the 1-100 millihertz band at amplitudes of 10^-21. However, it has become clear that sequential search of the parameters in such an ultra-sensitive detection problem is very time consuming; hence, a new strategy has been developed. Parallelization of existing sequential search algorithms for gravitational wave signal identification consists of decomposing sequential search loops, beginning with the outermost loops and working inward. In this process, the main challenge is to detect interdependencies among loops and to partition the loops so as to preserve concurrency. Existing parallel programs are based upon either shared memory or distributed memory paradigms. In PVM, master and node programs are used to execute parallelization and process spawning. PVM can handle process management and process addressing schemes using a virtual machine configuration. The task scheduling, messaging, and signaling can be implemented efficiently for the LISA gravitational wave search process using a master and 6 nodes. This approach is accomplished using a server at NASA Ames Research Center that has been dedicated to the LISA Data Challenge Competition. Historically, gravitational wave and source identification parameter extraction has taken around 7 days on this dedicated single-thread Linux based server. Using the PVM approach, the parameter extraction problem can be reduced to within a day. The low frequency computation and a proxy signal-to-noise ratio are calculated in separate nodes that are controlled by the master using message and data vector passing. The message passing among nodes follows a pattern of synchronous and asynchronous send-and-receive protocols. The communication model and the message buffers are allocated dynamically to address rapid search of gravitational wave source information in the Mock LISA data sets.
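
    A stripped-down master for such a one-master/six-node decomposition might look like the following PVM sketch. The worker executable name "gw_node", the tags, and the normalized search band are all hypothetical stand-ins; the actual search logic is far more involved.

        #include <stdio.h>
        #include <pvm3.h>

        #define NNODES   6
        #define TAG_WORK 1
        #define TAG_BEST 2

        int main(void)
        {
            int tids[NNODES], i;
            double lo = 0.0, hi = 1.0;               /* normalized search band */

            /* One worker executable per node ("gw_node" is hypothetical). */
            pvm_spawn("gw_node", NULL, PvmTaskDefault, "", NNODES, tids);

            for (i = 0; i < NNODES; i++) {           /* scatter sub-bands */
                double a = lo + (hi - lo) * i / NNODES;
                double b = lo + (hi - lo) * (i + 1) / NNODES;
                pvm_initsend(PvmDataDefault);
                pvm_pkdouble(&a, 1, 1);
                pvm_pkdouble(&b, 1, 1);
                pvm_send(tids[i], TAG_WORK);
            }
            for (i = 0; i < NNODES; i++) {           /* gather each node's best */
                double snr, freq;
                pvm_recv(-1, TAG_BEST);
                pvm_upkdouble(&snr, 1, 1);
                pvm_upkdouble(&freq, 1, 1);
                printf("candidate: f=%g (proxy SNR %g)\n", freq, snr);
            }
            pvm_exit();
            return 0;
        }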

  8. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing SUN SPARC network with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
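
    For reference, the speedup metric mentioned here is S = T1/Tp, the ratio of single-processor to p-processor elapsed time, with parallel efficiency E = S/p; the timing values in the snippet below are made-up examples.

        #include <stdio.h>

        int main(void)
        {
            double t1 = 120.0, tp = 22.0;     /* example elapsed times, seconds */
            int p = 8;                        /* processors used for tp */
            double speedup = t1 / tp;         /* S = T1/Tp, about 5.45 */
            double efficiency = speedup / p;  /* E = S/p, about 0.68 */
            printf("S = %.2f, E = %.2f\n", speedup, efficiency);
            return 0;
        }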

  9. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research Adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than that obtained by using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is about 4 to 5 times less than using data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM SP-1, etc. is presented.
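
    Latency and bandwidth numbers of the sort cited here are conventionally measured with a ping-pong loop. The sketch below is one way to do it with plain PVM calls (not the authors' benchmark); it assumes the program is installed under the name "pingpong" where pvm_spawn can find it.

        #include <stdio.h>
        #include <sys/time.h>
        #include <pvm3.h>

        #define NREPS 1000
        #define TAG   7

        int main(void)
        {
            static char buf[1024];
            struct timeval t0, t1;
            double usec;
            int other, i;

            pvm_mytid();                       /* enroll in the virtual machine */
            other = pvm_parent();

            if (other == PvmNoParent) {        /* master: spawn one echo task */
                pvm_spawn("pingpong", NULL, PvmTaskDefault, "", 1, &other);
                gettimeofday(&t0, NULL);
                for (i = 0; i < NREPS; i++) {
                    pvm_initsend(PvmDataDefault);
                    pvm_pkbyte(buf, (int)sizeof buf, 1);
                    pvm_send(other, TAG);
                    pvm_recv(other, TAG);
                    pvm_upkbyte(buf, (int)sizeof buf, 1);
                }
                gettimeofday(&t1, NULL);
                usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
                printf("avg round trip: %.1f us\n", usec / NREPS);
            } else {                           /* child: echo everything back */
                for (i = 0; i < NREPS; i++) {
                    pvm_recv(other, TAG);
                    pvm_upkbyte(buf, (int)sizeof buf, 1);
                    pvm_initsend(PvmDataDefault);
                    pvm_pkbyte(buf, (int)sizeof buf, 1);
                    pvm_send(other, TAG);
                }
            }
            pvm_exit();
            return 0;
        }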

  10. The Pedagogical Variation Model (PVM) for Work-Based Training in Virtual Classrooms: Evaluation at Kuwait University

    ERIC Educational Resources Information Center

    Rogers, Maria Susy; Aldhafeeri, Fayiz Mensher

    2015-01-01

    A collaborative research initiative was undertaken to evaluate the pedagogical variation model (PVM) for online learning and teaching at Kuwait University. Outcomes from sample populations of students--both postgraduates and undergraduates--from the Faculty of Education were analyzed for comparison. As predicted in the PVM, the findings indicate…

  11. Scalable and reusable emulator for evaluating the performance of SS7 networks

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; Tseng, Kent H.; Lim, Koon Seng; Choe, Winston

    1994-04-01

    A scalable and reusable emulator was designed and implemented for studying the behavior of SS7 networks. The emulator design was largely based on public domain software. It was developed on top of an environment supported by PVM, the Parallel Virtual Machine, and managed by OSIMIS, the OSI Management Information Service platform. The emulator runs on top of a commercially available ATM LAN interconnecting engineering workstations. As a case study for evaluating the emulator, the behavior of the Singapore National SS7 Network under fault and unbalanced loading conditions was investigated.

  12. The engine design engine. A clustered computer platform for the aerodynamic inverse design and analysis of a full engine

    NASA Technical Reports Server (NTRS)

    Sanz, J.; Pischel, K.; Hubler, D.

    1992-01-01

    An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing library in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled seamlessly by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, yielding efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatility, which permits the use of whatever available computational resources are experiencing less load at a given point in time.

  13. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed-memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, the SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message-passing protocol used for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and reduce memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  14. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed-memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, the SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message-passing protocol used for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and reduce memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  15. Scalability and Portability of Two Parallel Implementations of ADI

    NASA Technical Reports Server (NTRS)

    Phung, Thanh; VanderWijngaart, Rob F.

    1994-01-01

    Two domain decompositions for the implementation of the NAS Scalar Penta-diagonal Parallel Benchmark on MIMD systems are investigated, namely transposition and multi-partitioning. Hardware platforms considered are the Intel iPSC/860 and Paragon XP/S-15, and clusters of SGI workstations on Ethernet, communicating through PVM. It is found that the multi-partitioning strategy offers the kind of coarse granularity that allows scaling up to hundreds of processors on a massively parallel machine. Moreover, efficiency is retained when the code is ported verbatim (save for message-passing syntax) to a PVM environment on a modest-size cluster of workstations.

  16. Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations

    NASA Technical Reports Server (NTRS)

    Chanchio, Kasidit; Sun, Xian-He

    1996-01-01

    This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. New concepts of the migration point, migration point analysis, and necessary data analysis are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points, whereas necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology to perform reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.

  17. Implementation of Helioseismic Data Reduction and Diagnostic Techniques on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    Korzennik, Sylvain

    1997-01-01

    Under the direction of Dr. Rhodes, and the technical supervision of Dr. Korzennik, the data assimilation of high spatial resolution solar dopplergrams has been carried out throughout the program on the Intel Touchstone Delta supercomputer. With the help of a research assistant, partially supported by this grant, and under the supervision of Dr. Korzennik, code development was carried out at SAO using various available resources. To ensure cross-platform portability, PVM was selected as the message-passing library. A parallel implementation of power spectra computation for helioseismology data reduction, using PVM, was successfully completed. It was successfully ported to SMP architectures (i.e., SUN) and to some MPP architectures (i.e., the CM5). Due to limitations of the PVM implementation on the Cray T3D, the port to that architecture was not completed at the time.

  18. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-12-31

    Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of the MCNP code on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.
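
    The master/worker structure of such a multitasked Monte Carlo run is simple because histories are independent. The sketch below is hypothetical (the worker executable "mc_worker", the toy seeding, and the generic tally stand in for the ITS/MCNP physics) but shows the history partitioning and tally combination the paragraph describes.

        #include <stdio.h>
        #include <pvm3.h>

        #define NWORKERS 8
        #define TAG_JOB  1
        #define TAG_SUM  2

        int main(void)
        {
            int tids[NWORKERS], i;
            long histories = 1000000L;
            long chunk = histories / NWORKERS;  /* assumes exact divisibility */
            double sum = 0.0, sumsq = 0.0;

            /* "mc_worker" is a hypothetical executable that runs `chunk`
               independent histories with its own seed and returns the
               partial tally sum and sum of squares. */
            pvm_spawn("mc_worker", NULL, PvmTaskDefault, "", NWORKERS, tids);

            for (i = 0; i < NWORKERS; i++) {
                long seed = 12345L + i;        /* per-worker stream (simplified) */
                pvm_initsend(PvmDataDefault);
                pvm_pklong(&chunk, 1, 1);
                pvm_pklong(&seed, 1, 1);
                pvm_send(tids[i], TAG_JOB);
            }
            for (i = 0; i < NWORKERS; i++) {   /* combine partial tallies */
                double s, s2;
                pvm_recv(-1, TAG_SUM);
                pvm_upkdouble(&s, 1, 1);
                pvm_upkdouble(&s2, 1, 1);
                sum += s;
                sumsq += s2;
            }
            {
                double mean = sum / histories;
                double var  = sumsq / histories - mean * mean;
                /* Relative 1-sigma uncertainty falls as 1/sqrt(N). */
                printf("mean tally %g, variance %g\n", mean, var);
            }
            pvm_exit();
            return 0;
        }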

  19. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-02-01

    Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources, and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of MCNP on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.

  20. Parallel computation and the Basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1993-05-01

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  21. Parallel computation and the Basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1992-12-16

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  22. HeNCE: A Heterogeneous Network Computing Environment

    DOE PAGES

    Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.

  23. Application of a distributed network in computational fluid dynamic simulations

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish

    1994-01-01

    A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using parallel virtual machine (PVM) and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.
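
    The nearest-neighbor structure mentioned here is what makes message passing fit so well: per iteration, each subdomain only exchanges its edge values with adjacent subdomains. A one-dimensional PVM sketch follows (assuming the master has distributed a tids[] table mapping subdomain index to task id); PVM sends are buffered by the daemon, so posting both sends before the receives does not deadlock.

        #include <pvm3.h>

        #define TAG_L 1  /* message carries the sender's leftmost interior value  */
        #define TAG_R 2  /* message carries the sender's rightmost interior value */

        /* Exchange one-cell-wide ghost values with left/right neighbors.
           u holds the local slab, with u[0] and u[n+1] as ghost cells;
           tids[] maps subdomain index -> PVM task id; me is this task's
           subdomain index and np the number of subdomains. */
        void exchange_ghosts(double *u, int n, int me, int np, const int *tids)
        {
            if (me > 0) {                       /* send my left interior edge */
                pvm_initsend(PvmDataDefault);
                pvm_pkdouble(&u[1], 1, 1);
                pvm_send(tids[me - 1], TAG_L);
            }
            if (me < np - 1) {                  /* send my right interior edge */
                pvm_initsend(PvmDataDefault);
                pvm_pkdouble(&u[n], 1, 1);
                pvm_send(tids[me + 1], TAG_R);
                pvm_recv(tids[me + 1], TAG_L);  /* neighbor's left edge -> my right ghost */
                pvm_upkdouble(&u[n + 1], 1, 1);
            }
            if (me > 0) {
                pvm_recv(tids[me - 1], TAG_R);  /* neighbor's right edge -> my left ghost */
                pvm_upkdouble(&u[0], 1, 1);
            }
        }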

  24. Monitoring Data-Structure Evolution in Distributed Message-Passing Programs

    NASA Technical Reports Server (NTRS)

    Sarukkai, Sekhar R.; Beers, Andrew; Woodrow, Thomas S. (Technical Monitor)

    1996-01-01

    Monitoring the evolution of data structures in parallel and distributed programs is critical for debugging their semantics and performance. However, the current state of the art in tracking and presenting data-structure information on parallel and distributed environments is cumbersome and does not scale. In this paper we present a methodology that automatically tracks memory bindings (not the actual contents) of static and dynamic data structures of message-passing C programs using PVM. With the help of a number of examples we show that, in addition to determining the impact of memory allocation overheads on program performance, graphical views can help in debugging the semantics of program execution. Scalable animations of virtual address bindings of source-level data structures are used for debugging the semantics of parallel programs across all processors. In conjunction with lightweight core files, this technique can be used to complement traditional debuggers on single processors. Detailed information (such as data-structure contents) on specific nodes can be determined using traditional debuggers after the data-structure evolution leading to the semantic error is observed graphically.

  25. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared memory machines and another for distributed memory systems using the PVM message-passing library. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
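
    The "advanced seeds" device works because a linear congruential generator can be jumped ahead n steps in O(log n) time: the n-fold composition of x -> (a*x + c) mod m is itself affine. The sketch below uses the drand48 constants (a 48-bit LCG) and, as a simplification, assumes one random draw per history when spacing the per-task streams; it is an illustration of the idea, not the KENO-Va generator.

        #include <stdio.h>
        #include <stdint.h>

        #define A 0x5DEECE66DULL             /* drand48 multiplier */
        #define C 0xBULL                     /* drand48 increment  */
        #define MASK ((1ULL << 48) - 1)      /* modulus 2^48 */

        /* Advance the seed by n steps in O(log n): the n-fold composition
           of x -> a*x + c is itself affine, x -> an*x + cn. */
        static uint64_t lcg_skip(uint64_t seed, uint64_t n)
        {
            uint64_t an = 1, cn = 0, a = A, c = C;
            while (n) {
                if (n & 1) {                 /* apply the current power */
                    an = (an * a) & MASK;
                    cn = (cn * a + c) & MASK;
                }
                c = (c * (a + 1)) & MASK;    /* square the affine map */
                a = (a * a) & MASK;
                n >>= 1;
            }
            return (an * seed + cn) & MASK;
        }

        int main(void)
        {
            uint64_t base = 20069;           /* arbitrary example master seed */
            uint64_t per_task = 100000;      /* draws reserved per task */
            int t;
            for (t = 0; t < 6; t++)          /* disjoint, reproducible streams */
                printf("task %d seed: %012llx\n", t,
                       (unsigned long long)lcg_skip(base, t * per_task));
            return 0;
        }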

  26. Optimized and parallelized implementation of the electronegativity equalization method and the atom-bond electronegativity equalization method.

    PubMed

    Vareková, R Svobodová; Koca, J

    2006-02-01

    The most common way to calculate charge distribution in a molecule is ab initio quantum mechanics (QM). Some faster alternatives to QM have also been developed, the so-called "equalization methods" EEM and ABEEM, which are based on DFT. We have implemented and optimized the EEM and ABEEM methods and created the EEM SOLVER and ABEEM SOLVER programs. It has been found that the most time-consuming part of equalization methods is the reduction of the matrix belonging to the equation system generated by the method. Therefore, for both methods this part was replaced by the parallel algorithm WIRS and implemented within the PVM environment. The parallelized versions of the programs EEM SOLVER and ABEEM SOLVER showed promising results, especially on a single computer with several processors (compact PVM). The implemented programs are available through the Web page http://ncbr.chemi.muni.cz/~n19n/eem_abeem.

  27. Computation of Coupled Thermal-Fluid Problems in Distributed Memory Environment

    NASA Technical Reports Server (NTRS)

    Wei, H.; Shang, H. M.; Chen, Y. S.

    2001-01-01

    Coupled thermal-fluid problems are very important to aerospace and engineering applications. Instead of analyzing heat transfer and fluid flow separately, this study merged two well-accepted engineering solution methods, SINDA for thermal analysis and FDNS for fluid flow simulation, into a unified multi-disciplinary thermal-fluid prediction method. A fully conservative patched grid interface algorithm for arbitrary two-dimensional and three-dimensional geometry has been developed. A state-of-the-art parallel computing approach was used to couple SINDA and FDNS for the communication of boundary conditions through PVM (Parallel Virtual Machine) libraries. Therefore, the thermal analysis performed by SINDA and the fluid flow calculated by FDNS are fully coupled to obtain steady state or transient solutions. The natural convection between two thick-walled eccentric tubes was calculated, and the predicted results match the experimental data. A 3-D rocket engine model and a real 3-D SSME geometry were used to test the current model, and reasonable temperature fields were obtained.

  28. Users manual for the Chameleon parallel programming tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gropp, W.; Smith, B.

    1993-06-01

    Message passing is a common method for writing programs for distributed-memory parallel computers. Unfortunately, the lack of a standard for message passing has hampered the construction of portable and efficient parallel programs. In an attempt to remedy this problem, a number of groups have developed their own message-passing systems, each with its own strengths and weaknesses. Chameleon is a second-generation system of this type. Rather than replacing these existing systems, Chameleon is meant to supplement them by providing a uniform way to access many of these systems. Chameleon's goals are to (a) be very lightweight (low overhead), (b) be highly portable, and (c) help standardize program startup and the use of emerging message-passing operations such as collective operations on subsets of processors. Chameleon also provides a way to port programs written using PICL or Intel NX message passing to other systems, including collections of workstations. Chameleon is tracking the Message-Passing Interface (MPI) draft standard and will provide both an MPI implementation and an MPI transport layer. Chameleon provides support for heterogeneous computing by using p4 and PVM. Chameleon's support for homogeneous computing includes the portable libraries p4, PICL, and PVM and vendor-specific implementations for Intel NX, IBM EUI (SP-1), and Thinking Machines CMMD (CM-5). Support for Ncube and PVM 3.x is also under development.

  29. Development and Application of a Parallel LCAO Cluster Method

    NASA Astrophysics Data System (ADS)

    Patton, David C.

    1997-08-01

    CPU-intensive steps in the SCF electronic structure calculations of clusters and molecules with a first-principles LCAO method have been fully parallelized via a message-passing paradigm. Identification of the parts of the code that are composed of many independent compute-intensive steps is discussed in detail, as they are the most readily parallelized. Most of the parallelization involves spatially decomposing numerical operations on a mesh. One exception is the solution of Poisson's equation, which relies on distribution of the charge density and multipole methods. The method we use to parallelize this part of the calculation is quite novel and is covered in detail. We present a general method for dynamically load-balancing a parallel calculation and discuss how we use this method in our code. The results of benchmark calculations of the IR and Raman spectra of PAH molecules such as anthracene (C14H10) and tetracene (C18H12) are presented. These benchmark calculations were performed on an IBM SP2 and a SUN Ultra HPC server with both MPI and PVM. Scalability and speedup for these calculations are analyzed to determine the efficiency of the code. In addition, performance and usage issues for MPI and PVM are presented.

  30. A heterogeneous computing environment for simulating astrophysical fluid flows

    NASA Technical Reports Server (NTRS)

    Cazes, J.

    1994-01-01

    In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) library, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume-rendering workstations.

  31. Demonstration of a full volume 3D pre-stack depth migration in the Garden Banks area using massively parallel processor (MPP) technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solano, M.; Chang, H.; VanDyke, J.

    1996-12-31

    This paper describes the implementation and results of portable, production-scale 3D pre-stack Kirchhoff depth migration software. Full volume pre-stack imaging was applied to a six million trace (46.9 Gigabyte) data set from a subsalt play in the Garden Banks area in the Gulf of Mexico. The velocity model building and updating were derived using image depth gathers and an image-driven strategy. After three velocity iterations, depth migrated sections revealed drilling targets that were not visible in the conventional 3D post-stack time migrated data set. As expected from the implementation of the migration algorithm, it was found that amplitudes are well preserved and anomalies associated with known reservoirs conform to petrophysical predictions. Image gathers for velocity analysis and the final depth migrated volume were generated on an 1824-node Intel Paragon at Sandia National Laboratories. The code has been successfully ported to CRAY (T3D) and Unix-workstation Parallel Virtual Machine (PVM) environments.

  32. Parallel Event Analysis Under Unix

    NASA Astrophysics Data System (ADS)

    Looney, S.; Nilsson, B. S.; Oest, T.; Pettersson, T.; Ranjard, F.; Thibonnier, J.-P.

    The ALEPH experiment at LEP, the CERN CN division and Digital Equipment Corp. have, in a joint project, developed a parallel event analysis system. The parallel physics code is identical to ALEPH's standard analysis code, ALPHA; only the organisation of input/output is changed. The user may switch between sequential and parallel processing by simply changing one input "card". The initial implementation runs on an 8-node DEC 3000/400 farm, using the PVM software, and exhibits a near-perfect speed-up linearity, reducing the turn-around time by a factor of 8.

  33. High-energy physics software parallelization using database techniques

    NASA Astrophysics Data System (ADS)

    Argante, E.; van der Stok, P. D. V.; Willers, I.

    1997-02-01

    A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is for a large part transparent to the programmer, resulting in a higher level of abstraction than the native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with the performance of native PVM and MPI.

  34. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library, and nearly optimal speedups have been obtained that help alleviate the long process of searching through imagery.

  35. Evaluation of Pneumonia Virus of Mice as a Possible Human Pathogen

    PubMed Central

    Brock, Linda G.; Karron, Ruth A.; Krempl, Christine D.; Collins, Peter L.

    2012-01-01

    Pneumonia virus of mice (PVM), a relative of human respiratory syncytial virus (RSV), causes respiratory disease in mice. There is serologic evidence suggesting widespread exposure of humans to PVM. To investigate replication in primates, African green monkeys (AGM) and rhesus macaques (n = 4) were inoculated with PVM by the respiratory route. Virus was shed intermittently at low levels by a subset of animals, suggesting poor permissiveness. PVM efficiently replicated in cultured human cells and inhibited the type I interferon (IFN) response in these cells. This suggests that poor replication in nonhuman primates was not due to a general nonpermissiveness of primate cells or poor control of the IFN response. Seroprevalence in humans was examined by screening sera from 30 adults and 17 young children for PVM-neutralizing activity. Sera from a single child (6%) and 40% of adults had low neutralizing activity against PVM, which could be consistent with increasing incidence of exposure following early childhood. There was no cross-reaction of human or AGM sera between RSV and PVM and no cross-protection in the mouse model. In native Western blots, human sera reacted with RSV but not PVM proteins under conditions in which AGM immune sera reacted strongly. Serum reactivity was further evaluated by flow cytometry using unfixed Vero cells infected with PVM or RSV expressing green fluorescent protein (GFP) as a measure of viral gene expression. The reactivity of human sera against RSV-infected cells correlated with GFP expression, whereas reactivity against PVM-infected cells was low and uncorrelated with GFP expression. Thus, PVM specificity was not evident. Our results indicate that the PVM-neutralizing activity of human sera is not due to RSV- or PVM-specific antibodies but may be due to low-affinity, polyreactive natural antibodies of the IgG subclass. The absence of PVM-specific antibodies and restriction in nonhuman primates makes PVM unlikely to be a human pathogen. PMID:22438539

  36. High Performance Programming Using Explicit Shared Memory Model on the Cray T3D

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The Cray T3D is the first-phase system in Cray Research Inc.'s (CRI) three-phase massively parallel processing program. In this report we describe the architecture of the T3D, as well as the CRAFT (Cray Research Adaptive Fortran) programming model, and contrast it with PVM, which is also supported on the T3D. We present some performance data based on the NAS Parallel Benchmarks to illustrate both architectural and software features of the T3D.

  37. Unique nonstructural proteins of Pneumonia Virus of Mice (PVM) promote degradation of interferon (IFN) pathway components and IFN-stimulated gene proteins.

    PubMed

    Dhar, Jayeeta; Barik, Sailen

    2016-12-01

    Pneumonia Virus of Mice (PVM) is the only virus that shares the Pneumovirus genus of the Paramyxoviridae family with Respiratory Syncytial Virus (RSV). A deadly mouse pathogen, PVM has the potential to serve as a robust animal model of RSV infection, since human RSV does not fully replicate the human pathology in mice. Like RSV, PVM also encodes two nonstructural proteins that have been implicated to suppress the IFN pathway, but surprisingly, they exhibit no sequence similarity with their RSV equivalents. The molecular mechanism of PVM NS function, therefore, remains unknown. Here, we show that recombinant PVM NS proteins degrade the mouse counterparts of the IFN pathway components. Proteasomal degradation appears to be mediated by ubiquitination promoted by PVM NS proteins. Interestingly, NS proteins of PVM lowered the levels of several ISG (IFN-stimulated gene) proteins as well. These results provide a molecular foundation for the mechanisms by which PVM efficiently subverts the IFN response of the murine cell. They also reveal that in spite of their high sequence dissimilarity, the two pneumoviral NS proteins are functionally and mechanistically similar.

  38. 21 CFR 872.3500 - Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Title 21 (Food and Drugs), revised as of April 1, 2013. MEDICAL DEVICES, DENTAL DEVICES, Prosthetic Devices, § 872.3500: Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium (NaCMC) denture adhesive.

  39. 21 CFR 872.3500 - Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Title 21 (Food and Drugs), revised as of April 1, 2014. MEDICAL DEVICES, DENTAL DEVICES, Prosthetic Devices, § 872.3500: Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium (NaCMC) denture adhesive.

  40. 21 CFR 872.3500 - Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 21 (Food and Drugs), revised as of April 1, 2010. MEDICAL DEVICES, DENTAL DEVICES, Prosthetic Devices, § 872.3500: Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium (NaCMC) denture adhesive.

  41. 21 CFR 872.3500 - Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    Title 21 (Food and Drugs), revised as of April 1, 2012. MEDICAL DEVICES, DENTAL DEVICES, Prosthetic Devices, § 872.3500: Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium (NaCMC) denture adhesive.

  42. 21 CFR 872.3500 - Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Title 21 (Food and Drugs), revised as of April 1, 2011. MEDICAL DEVICES, DENTAL DEVICES, Prosthetic Devices, § 872.3500: Polyvinylmethylether maleic anhydride (PVM-MA), acid copolymer, and carboxymethylcellulose sodium (NaCMC) denture adhesive.

  43. Detection of a pneumonia virus of mice (PVM) in an African hedgehog (Atelerix albiventris) with suspected wobbly hedgehog syndrome (WHS).

    PubMed

    Madarame, Hiroo; Ogihara, Kikumi; Kimura, Moe; Nagai, Makoto; Omatsu, Tsutomu; Ochiai, Hideharu; Mizutani, Tetsuya

    2014-09-17

    A pneumonia virus of mice (PVM) from an African hedgehog (Atelerix albiventris) with suspected wobbly hedgehog syndrome (WHS) was detected and genetically characterized. The affected hedgehog had a nonsuppurative encephalitis with vacuolization of the white matter, and the brain samples yielded RNA reads highly homologous to PVM strain 15 (96.5% full genomic sequence homology by next generation sequencing analysis). PVM antigen was also detected immunohistochemically in the brain and the lungs. PVM was strongly suggested as the causative agent of the encephalitis of a hedgehog with suspected WHS. This is the first report of PVM infection in hedgehogs.

  44. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The research recognizes that such design optimization problems are computationally expensive and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initiate the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware as well as of portable, parallel programming environments. The second effort was to implement the MSO methodology for a sample problem using the portable parallel programming system Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology is well suited to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
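
    The near-linear speedup reported for sensitivity coefficients is what one expects, since each coefficient is an independent analysis run. A schematic PVM master for forward-difference sensitivities follows; the executable name "eval_task", the tags, and the baseline value are hypothetical placeholders for the actual aeropropulsion analysis.

        #include <stdio.h>
        #include <pvm3.h>

        #define NDV   20      /* number of design variables (example) */
        #define TAG_X 1
        #define TAG_G 2

        int main(void)
        {
            int tids[NDV], i;
            double h = 1e-4, grad[NDV];
            double f0 = 1.0;  /* baseline objective, assumed computed beforehand */

            /* One evaluation task per design variable; "eval_task" is a
               hypothetical program that reruns the analysis at a perturbed
               design and returns the perturbed objective value. */
            pvm_spawn("eval_task", NULL, PvmTaskDefault, "", NDV, tids);
            for (i = 0; i < NDV; i++) {
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&i, 1, 1);           /* which variable to perturb */
                pvm_pkdouble(&h, 1, 1);        /* finite-difference step */
                pvm_send(tids[i], TAG_X);
            }
            for (i = 0; i < NDV; i++) {
                int j;
                double fj;
                pvm_recv(-1, TAG_G);
                pvm_upkint(&j, 1, 1);
                pvm_upkdouble(&fj, 1, 1);
                grad[j] = (fj - f0) / h;       /* forward difference */
            }
            for (i = 0; i < NDV; i++)
                printf("df/dx%d = %g\n", i, grad[i]);
            pvm_exit();
            return 0;
        }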

  5. Methodologies and systems for heterogeneous concurrent computing

    NASA Technical Reports Server (NTRS)

    Sunderam, V. S.

    1994-01-01

    Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.
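
    As a concrete illustration of the message-passing model these records revolve around, the following is a minimal C sketch of a PVM master task written against the standard pvm3 API; the "worker" executable name is a placeholder, and error handling is omitted.

        /* Minimal PVM master: spawn one worker, send it an integer, and
           read back the reply. A sketch against the standard pvm3 C API;
           "worker" is a hypothetical executable name. */
        #include <stdio.h>
        #include "pvm3.h"

        int main(void)
        {
            int mytid = pvm_mytid();          /* enroll in the virtual machine */
            int child, data = 42;

            if (pvm_spawn("worker", NULL, PvmTaskDefault, "", 1, &child) == 1) {
                pvm_initsend(PvmDataDefault); /* XDR encoding for heterogeneous hosts */
                pvm_pkint(&data, 1, 1);
                pvm_send(child, 1);           /* message tag 1 */

                pvm_recv(child, 2);           /* block on the reply, tag 2 */
                pvm_upkint(&data, 1, 1);
                printf("task t%x: worker replied %d\n", mytid, data);
            }
            pvm_exit();                       /* leave the virtual machine */
            return 0;
        }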

  6. Sensory Neuron Fates Are Distinguished by a Transcriptional Switch that Regulates Dendrite Branch Stabilization

    PubMed Central

    Smith, Cody J.; O’Brien, Timothy; Chatzigeorgiou, Marios; Spencer, W. Clay; Feingold-Link, Elana; Husson, Steven J.; Hori, Sayaka; Mitani, Shohei; Gottschalk, Alexander; Schafer, William R.; Miller, David M.

    2013-01-01

    Sensory neurons adopt distinct morphologies and functional modalities to mediate responses to specific stimuli. Transcription factors and their downstream effectors orchestrate this outcome but are incompletely defined. Here, we show that different classes of mechanosensory neurons in C. elegans are distinguished by the combined action of the transcription factors MEC-3, AHR-1, and ZAG-1. Low levels of MEC-3 specify the elaborate branching pattern of PVD nociceptors, whereas high MEC-3 is correlated with the simple morphology of AVM and PVM touch neurons. AHR-1 specifies AVM touch neuron fate by elevating MEC-3 while simultaneously blocking expression of nociceptive genes such as the MEC-3 target, the claudin-like membrane protein HPO-30, that promotes the complex dendritic branching pattern of PVD. ZAG-1 exercises a parallel role to prevent PVM from adopting the PVD fate. The conserved dendritic branching function of the Drosophila AHR-1 homolog, Spineless, argues for similar pathways in mammals. PMID:23889932

  7. Automated Concurrent Blackboard System Generation in C++

    NASA Technical Reports Server (NTRS)

    Kaplan, J. A.; McManus, J. W.; Bynum, W. L.

    1999-01-01

    In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX™ workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
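
    The generated sources are C++, but the event loop that each knowledge-source process runs over PVM can be sketched in C against the same pvm3 API. This is a hedged sketch of the pattern, not the generator's actual output; the message tags and the ks_execute() hook are hypothetical placeholders.

        /* Knowledge-source event loop over PVM: wait for blackboard
           events, run the user-supplied logic, post results back. */
        #include "pvm3.h"

        #define TAG_EVENT    10   /* blackboard notifies KS of a change */
        #define TAG_RESULT   11   /* KS posts its contribution back     */
        #define TAG_SHUTDOWN 99

        extern int ks_execute(int event, int *result);  /* user-supplied */

        void knowledge_source_loop(int blackboard_tid)
        {
            for (;;) {
                int bufid = pvm_recv(blackboard_tid, -1);   /* any tag */
                int bytes, tag, src;
                pvm_bufinfo(bufid, &bytes, &tag, &src);
                if (tag == TAG_SHUTDOWN)
                    break;
                if (tag == TAG_EVENT) {
                    int event, result;
                    pvm_upkint(&event, 1, 1);
                    if (ks_execute(event, &result)) {
                        pvm_initsend(PvmDataDefault);
                        pvm_pkint(&result, 1, 1);
                        pvm_send(blackboard_tid, TAG_RESULT);
                    }
                }
            }
            pvm_exit();
        }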

  8. Brain perivascular macrophages: characterization and functional roles in health and disease.

    PubMed

    Faraco, Giuseppe; Park, Laibaik; Anrather, Josef; Iadecola, Costantino

    2017-11-01

    Perivascular macrophages (PVM) are a distinct population of resident brain macrophages characterized by a close association with the cerebral vasculature. PVM migrate from the yolk sac into the brain early in development and, like microglia, are likely to be a self-renewing cell population that, in the normal state, is not replenished by circulating monocytes. Increasing evidence implicates PVM in several disease processes, ranging from brain infections and immune activation to regulation of the hypothalamic-adrenal axis and neurovascular-neurocognitive dysfunction in the setting of hypertension, Alzheimer disease pathology, or obesity. These effects involve crosstalk between PVM and cerebral endothelial cells, interaction with circulating immune cells, and/or production of reactive oxygen species. Overall, the available evidence supports the idea that PVM are a key component of the brain-resident immune system with broad implications for the pathogenesis of major brain diseases. A better understanding of the biology and pathobiology of PVM may lead to new insights and therapeutic strategies for a wide variety of brain diseases.

  9. Thermal control system for Space Station Freedom photovoltaic power module

    NASA Technical Reports Server (NTRS)

    Hacha, Thomas H.; Howard, Laura

    1994-01-01

    The electric power for Space Station Freedom (SSF) is generated by the solar arrays of the photovoltaic power modules (PVM's) and conditioned, controlled, and distributed by a power management and distribution system. The PVM's are located outboard of the alpha gimbals of SSF. A single-phase thermal control system is being developed to provide thermal control of PVM electrical equipment and energy storage batteries. This system uses ammonia as the coolant and a direct-flow deployable radiator. The description and development status of the PVM thermal control system is presented.

  10. Thermal control system for Space Station Freedom photovoltaic power module

    NASA Technical Reports Server (NTRS)

    Hacha, Thomas H.; Howard, Laura S.

    1992-01-01

    The electric power for Space Station Freedom (SSF) is generated by the solar arrays of the photovoltaic power modules (PVM's) and conditioned, controlled, and distributed by a power management and distribution system. The PVM's are located outboard of the alpha gimbals of SSF. A single-phase thermal control system is being developed to provide thermal control of PVM electrical equipment and energy storage batteries. This system uses ammonia as the coolant and a direct-flow deployable radiator. This paper presents the description and development status of the PVM thermal control system.

  11. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
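
    A minimal sketch of the per-timestep interface exchange that such coupling implies is shown below, written against plain pvm3 calls; the function name, tag, and data layout are hypothetical and not taken from VCE. Because PVM sends are buffered, both coupled components can safely send before receiving.

        /* Swap boundary-condition data with a coupled component solver. */
        #include "pvm3.h"

        #define TAG_BC 42

        void exchange_interface(int peer_tid, double *bc_out,
                                double *bc_in, int n)
        {
            pvm_initsend(PvmDataDefault);   /* ship this component's state */
            pvm_pkdouble(bc_out, n, 1);
            pvm_send(peer_tid, TAG_BC);

            pvm_recv(peer_tid, TAG_BC);     /* receive the neighbor's state */
            pvm_upkdouble(bc_in, n, 1);
        }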

  12. Parallel algorithms for modeling flow in permeable media. Annual report, February 15, 1995 - February 14, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G.A. Pope; K. Sepehrnoori; D.C. McKinney

    1996-03-15

    This report describes the application of distributed-memory parallel programming techniques to a compositional simulator called UTCHEM. The University of Texas Chemical Flooding reservoir simulator (UTCHEM) is a general-purpose vectorized chemical flooding simulator that models the transport of chemical species in three-dimensional, multiphase flow through permeable media. The parallel version of UTCHEM addresses solving large-scale problems by reducing the amount of time required to obtain the solution as well as providing a flexible and portable programming environment. In this work, the original parallel version of UTCHEM was modified and ported to the CRAY T3D and CRAY T3E distributed-memory multiprocessor computers using CRAY-PVM as the interprocessor communication library. Also, the data communication routines were modified such that portability of the original code across different computer architectures was made possible.
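
    For illustration, the ghost-cell swap that such data communication routines typically perform might look as follows in C with pvm3. This is a hypothetical sketch for a 1-D slab decomposition, not UTCHEM's actual code; PVM sends are buffered, so posting both sends before the receives is deadlock-free.

        /* Swap one plane of ghost cells with the left and right neighbors;
           a tid of -1 marks a physical (non-periodic) boundary. */
        #include "pvm3.h"

        #define TAG_HALO 7

        void halo_swap(int left_tid, int right_tid,
                       double *send_left, double *send_right,
                       double *recv_left, double *recv_right, int plane)
        {
            if (left_tid >= 0) {
                pvm_initsend(PvmDataDefault);
                pvm_pkdouble(send_left, plane, 1);
                pvm_send(left_tid, TAG_HALO);
            }
            if (right_tid >= 0) {
                pvm_initsend(PvmDataDefault);
                pvm_pkdouble(send_right, plane, 1);
                pvm_send(right_tid, TAG_HALO);
            }
            if (left_tid >= 0) {
                pvm_recv(left_tid, TAG_HALO);
                pvm_upkdouble(recv_left, plane, 1);
            }
            if (right_tid >= 0) {
                pvm_recv(right_tid, TAG_HALO);
                pvm_upkdouble(recv_right, plane, 1);
            }
        }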

  13. Laboratory Investigation of Direct Measurement of Ice Water Content, Ice Surface Area, and Effective Radius of Ice Crystals Using a Laser-Diffraction Instrument

    NASA Technical Reports Server (NTRS)

    Gerber, H.; DeMott, P. J.; Rogers, D. C.

    1995-01-01

    The aircraft microphysics probe, PVM-100A, was tested in the Colorado State University dynamic cloud chamber to establish its ability to measure ice water content (IWC), particle surface area (PSA), and effective radius (Re) in ice clouds. Its response was compared to other means of measuring those ice-cloud parameters, which included using FSSP-100 and 230-X 1-D optical probes for ice-crystal concentrations, a film-loop microscope for ice-crystal habits and dimensions, and an in-situ microscope for determining ice-crystal orientation. Intercomparisons were made in ice clouds containing ice crystals ranging in size from about 10 microns to 150 microns in diameter, with plate, columnar, dendritic, and spherical shapes. It was not possible to determine conclusively that the PVM accurately measures the IWC, PSA, and Re of ice crystals, because heat from the PVM partially evaporated the crystals in its vicinity in the chamber, thereby affecting its measurements. Similarities in the operating principle of the FSSP and PVM, and a comparison between Re measured by both instruments, suggest, however, that the PVM can make those measurements. The resolution limit of the PVM for IWC measurements was found to be on the order of 0.001 g/cubic m. Algorithms for correcting IWC measured by the FSSP and PVM were developed.

  14. Computational strategies for three-dimensional flow simulations on distributed computer systems. Ph.D. Thesis Semiannual Status Report, 15 Aug. 1993 - 15 Feb. 1994

    NASA Technical Reports Server (NTRS)

    Weed, Richard Allen; Sankar, L. N.

    1994-01-01

    An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research into procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.
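
    A master-side sketch of this kind of partitioned approach, spawning one solver task per grid partition and scattering the blocks, is given below; the "flowsolver" executable name, tag, and data layout are hypothetical and are not taken from the TEAM code.

        /* Spawn solver tasks across the workstation cluster and send
           each one its grid partition. */
        #include "pvm3.h"

        #define TAG_GRID  1
        #define MAX_PARTS 64

        int distribute_partitions(double **block, int *npts, int nparts)
        {
            int tids[MAX_PARTS];
            if (nparts > MAX_PARTS)
                nparts = MAX_PARTS;
            int started = pvm_spawn("flowsolver", NULL, PvmTaskDefault, "",
                                    nparts, tids);
            for (int i = 0; i < started; i++) {
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&npts[i], 1, 1);           /* partition size...  */
                pvm_pkdouble(block[i], npts[i], 1);  /* ...then its data   */
                pvm_send(tids[i], TAG_GRID);
            }
            return started;   /* number of solver tasks actually running */
        }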

  15. Preparation of Poly-(Methyl vinyl ether-co-maleic Anhydride) Nanoparticles by Solution-Enhanced Dispersion by Supercritical CO2

    PubMed Central

    Chen, Ai-Zheng; Wang, Guang-Ya; Wang, Shi-Bin; Feng, Jian-Gang; Liu, Yuan-Gang; Kang, Yong-Qiang

    2012-01-01

    Supercritical CO2-based technologies have been widely used in the formation of drug and/or polymer particles for biomedical applications. In this study, nanoparticles of poly-(methyl vinyl ether-co-maleic anhydride) (PVM/MA) were successfully fabricated by a process of solution-enhanced dispersion by supercritical CO2 (SEDS). A 2³ factorial experiment was designed to investigate and identify the significance of the processing parameters (concentration, flow, and solvent/nonsolvent) for the surface morphology, particle size, and particle size distribution of the products. The effect of the concentration of PVM/MA was found to be dominant in the results regarding particle size. Decreasing the initial solution concentration of PVM/MA decreased the particle size significantly. After optimization, the resulting PVM/MA nanoparticles exhibited a good spherical shape, a smooth surface, and a narrow particle size distribution. Fourier transform infrared spectroscopy (FTIR) spectra demonstrated that the chemical composition of PVM/MA was not altered during the SEDS process and that the SEDS process was therefore a typical physical process. The absolute value of the zeta potential of the obtained PVM/MA nanoparticles was larger than 40 mV, indicating the samples' stability in aqueous suspension. Analysis of thermogravimetry-differential scanning calorimetry (TG-DSC) revealed that the effect of the SEDS process on the thermostability of PVM/MA was negligible. The results of gas chromatography (GC) analysis confirmed that the SEDS process could efficiently remove the organic residue.

  16. The use of portable video media vs standard verbal communication in the urological consent process: a multicentre, randomised controlled, crossover trial.

    PubMed

    Winter, Matthew; Kam, Jonathan; Nalavenkata, Sunny; Hardy, Ellen; Handmer, Marcus; Ainsworth, Hannah; Lee, Wai Gin; Louie-Johnsun, Mark

    2016-11-01

    To determine whether portable video media (PVM) improves patients' knowledge and satisfaction during the consent process for cystoscopy and insertion of a ureteric stent compared to standard verbal communication (SVC), as informed consent is a crucial component of patient care and PVM is an emerging technology that may help improve the consent process. In this multi-centre randomised controlled crossover trial, patients requiring cystoscopy and stent insertion were recruited from two major teaching hospitals in Australia over a 15-month period (July 2014-December 2015). Patient information delivery was via PVM and SVC. The PVM consisted of an audio-visual presentation with cartoon animation presented on an iPad. Patient satisfaction was assessed using the validated Client Satisfaction Questionnaire 8 (CSQ-8; maximum score 32) and knowledge was tested using a true/false questionnaire (maximum score 28). Questionnaires were completed after the first intervention and after crossover. Scores were analysed using the independent samples t-test and the Wilcoxon signed-rank test for the crossover analysis. In all, 88 patients were recruited. A significant 3.1-point (15.5%) increase in understanding was demonstrated favouring the use of PVM (P < 0.001). There was no difference in patient satisfaction between the groups as judged by the CSQ-8. A significant 3.6-point (17.8%) increase in knowledge score was seen when the SVC group crossed over to the PVM arm. A total of 80.7% of patients preferred PVM and 19.3% preferred SVC. Limitations include the lack of a validated questionnaire to test knowledge acquired from the interventions. This study demonstrates patients' preference for PVM in the urological consent process for cystoscopy and ureteric stent insertion. PVM improves patients' understanding compared with SVC and is a more effective means of content delivery in terms of overall preference and knowledge gained during the consent process. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.

  17. Professional Veterinary Programs' Perceptions and Experiences Pertaining to Emotional Support Animals and Service Animals, and Recommendations for Policy Development.

    PubMed

    Schoenfeld-Tacher, Regina M; Kogan, Lori R

    Given the unique nature of programs in professional veterinary medicine (PVM), the increasing number of students requesting accommodations for emotional support animals (ESAs) in higher education settings is of growing interest to student affairs and administrative staff in PVM settings. Since the legislation pertaining to this type of support animal differs from the laws governing disability service animals, colleges and universities now need to develop new policies and guidelines. Representatives from a sample of 28 PVM programs completed a survey about the prevalence of student requests for ESAs and service animals. PVM associate deans for academic affairs also reported their perceptions of this issue and the challenges these requests might pose within veterinary teaching laboratories and patient treatment areas. Responses indicated that approximately one third of PVM programs have received requests for ESAs (32.1%) in the last 2 years, 17.9% have had requests for psychiatric service animals, and 17.9% for other types of service animals. Despite this, most associate deans reported not having or not being aware of university or college policies pertaining to these issues. Most associate deans are interested in learning more about this topic. This paper provides general recommendations for establishing university or PVM program policies.

  18. STUDIES ON PNEUMONIA VIRUS OF MICE (PVM) IN CELL CULTURE

    PubMed Central

    Harter, Donald H.; Choppin, Purnell W.

    1967-01-01

    Pneumonia virus of mice (PVM) has been serially propagated in a line of baby hamster kidney (BHK21) cells. A maximum titer of 6.3 × 10⁶ TCID₅₀ per ml was obtained, and there was little variation in yield on serial passage. PVM grown in BHK21 cells was antigenically similar to virus obtained from the mouse lung, but was somewhat less virulent for the mouse after 10 serial passages in these cells. Virus produced by BHK21 cells agglutinated mouse erythrocytes without prior heating or other treatment. Sedimentation of PVM in the ultracentrifuge or precipitation by ammonium sulfate resulted in a loss in infectivity but an increase in hemagglutinating activity, presumably due to disruption of the virus particle. In a potassium tartrate density gradient, the major portion of infective virus sedimented at a density of approximately 1.15, and noninfective hemagglutinin, at a density of approximately 1.13. Stock virus preparations appear to contain a large amount of noninfective hemagglutinin. The replication of PVM was not inhibited by 5-fluoro-2'-deoxyuridine, 5-bromo-2'-deoxyuridine, or 5-iodo-2'-deoxyuridine. Infected cells contained eosinophilic cytoplasmic inclusions which showed the acridine orange staining characteristic of single-stranded RNA. Foci of viral antigen were observed in the cytoplasm of infected cells by fluorescent antibody staining. The results suggest that PVM is an RNA virus that replicates in the cytoplasm. PMID:4165740

  19. Molecular make-up of the Plasmodium parasitophorous vacuolar membrane.

    PubMed

    Spielmann, Tobias; Montagna, Georgina N; Hecht, Leonie; Matuschewski, Kai

    2012-10-01

    Plasmodium, the causative agent of malaria, is an obligate, intracellular, eukaryotic cell that invades, replicates, and differentiates within hepatocytes and erythrocytes. Inside a host cell, a second membrane delineates the developing pathogen in addition to the parasite plasma membrane, resulting in a distinct cellular compartment, termed parasitophorous vacuole (PV). The PV membrane (PVM) constitutes the parasite-host cell interface and is likely central to nutrient acquisition, host cell remodeling, waste disposal, environmental sensing, and protection from innate defense. Over the past two decades, a number of parasite-encoded PVM proteins have been identified. They include multigene families and protein complexes, such as early-transcribed membrane proteins (ETRAMPs) and the Plasmodium translocon for exported proteins (PTEX). Nearly all Plasmodium PVM proteins are restricted to this genus and display transient and stage-specific expression. Here, we provide an overview of the PVM proteins of Plasmodium blood and liver stages. Biochemical and experimental genetics data suggest that some PVM proteins are ideal targets for novel anti-malarial intervention strategies. Copyright © 2012 Elsevier GmbH. All rights reserved.

  20. Virtual Microscopy: A Useful Tool for Meeting Evolving Challenges in the Veterinary Medical Curriculum

    NASA Astrophysics Data System (ADS)

    Kogan, Lori R.; Dowers, Kristy L.; Cerda, Jacey R.; Schoenfeld-Tacher, Regina M.; Stewart, Sherry M.

    2014-12-01

    Veterinary schools, similar to many professional health programs, face a myriad of evolving challenges in delivering their professional curricula, including expansion of class size, the cost of maintaining expensive laboratories, and increased demands on veterinary educators to use curricular time efficiently and creatively. Additionally, exponential expansion of the knowledge base through ongoing biomedical research, educational goals to increase student engagement and clinical reasoning earlier in the curriculum, and students' desire to access course materials and enhance their educational experience through the use of technology all support the need to reassess traditional microscope laboratories within Professional Veterinary Medical (PVM) educational programs. While there is clear justification for teaching veterinary students how to use a microscope for clinical evaluation of cytological preparations (i.e., complete blood count, urinalysis, fecal analysis, fine needle aspirates, etc.), virtual microscopy may be a viable alternative to light microscopy for teaching and learning fundamental histological concepts. This article discusses the results of a survey assessing PVM students' perceptions of using a virtual microscope for learning basic histology/microscopic anatomy, and the implications of these results for using virtual microscopy as a pedagogical tool in teaching basic histology to first-year PVM students.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glineur, Stephanie F.; Renshaw, Randall W.; Percopo, Caroline M.

    A previous report of a novel pneumovirus (PnV) isolated from the respiratory tract of a dog described its significant homology to the rodent pathogen, pneumonia virus of mice (PVM). The original PnV–Ane4 pathogen replicated in and could be re-isolated in infectious state from mouse lung but elicited minimal mortality compared to PVM strain J3666. Here we assess phylogeny and physiologic responses to 10 new PnV isolates. The G/glycoprotein sequences of all PnVs include elongated amino-termini when compared to the characterized PVMs, and suggest division into groups A and B. While we observed significant differences in cytokine production and neutrophil recruitment to the lungs of BALB/c mice in response to survival doses (50 TCID₅₀ units) of representative group A (114378-10-29-KY-F) and group B (7968-11-OK) PnVs, we observed no evidence for positive selection (dN>dS) among the PnV/PnV, PVM/PnV or PVM/PVM G/glycoprotein or F/fusion protein sequence pairs. Highlights: • We consider ten novel isolates of the pneumovirus (PnV) first described by Renshaw and colleagues. • The G/glycoprotein sequences of all PnVs include elongated amino-termini when compared to PVM. • We detect cytokine production and neutrophil recruitment to the lungs of mice in response to PnV. • We observed no evidence for positive selection (dN>dS) among the gene sequence pairs.

  2. Interactions between Multiple Genetic Determinants in the 5′ UTR and VP1 Capsid Control Pathogenesis of Chronic Post-Viral Myopathy caused by Coxsackievirus B1

    PubMed Central

    Sandager, Maribeth M.; Nugent, Jaime L.; Schulz, Wade L.; Messner, Ronald P.; Tam, Patricia E.

    2008-01-01

    Mice infected with coxsackievirus B1 Tucson (CVB1T) develop chronic, post-viral myopathy (PVM) with clinical manifestations of hind limb muscle weakness and myositis. The objective of the current study was to establish the genetic basis of myopathogenicity in CVB1T. Using a reverse genetics approach, full attenuation of PVM could only be achieved by simultaneously mutating four sites located at C706U in the 5′ untranslated region (5′ UTR) and at Y87F, V136A, and T276A in the VP1 capsid. Engineering these four myopathic determinants into an amyopathic CVB1T variant restored the ability to cause PVM. Moreover, these same four determinants controlled PVM expression in a second strain of mice, indicating that the underlying mechanism is operational in mice of different genetic backgrounds. Modeling studies predict that C706U alters both local and long-range pairing in the 5′ UTR, and that VP1 determinants are located on the capsid surface. However, these differences did not affect viral titers, temperature stability, pH stability, or the antibody response to virus. These studies demonstrate that PVM develops from a complex interplay between viral determinants in the 5′ UTR and VP1 capsid and have uncovered intriguing similarities between genetic determinants that cause PVM and those involved in pathogenesis of other enteroviruses. PMID:18029287

  3. Pneumonia Virus of Mice: Severe Respiratory Virus Infection in a Natural Host

    PubMed Central

    Rosenberg, Helene F.; Domachowske, Joseph B.

    2008-01-01

    Pneumonia virus of mice (PVM; family Paramyxoviridae, genus Pneumovirus) is a natural mouse pathogen that is closely related to the human and bovine respiratory syncytial viruses. Among the prominent features of this infection, robust replication of PVM takes place in bronchial epithelial cells in response to a minimal virus inoculum. Virus replication in situ results in local production of proinflammatory cytokines (MIP-1α, MIP-2, MCP-1 and IFNγ) and granulocyte recruitment to the lung. If left unchecked, PVM infection and the ensuing inflammatory response ultimately lead to pulmonary edema, respiratory compromise and death. In this review, we consider the recent studies using the PVM model that have provided important insights into the role of the inflammatory response in the pathogenesis of severe respiratory virus infection. We also highlight several works that have elucidated acquired immune responses to this pathogen, including T cell responses and the development of humoral immunity. Finally, we consider several immunomodulatory strategies that have been used successfully to reduce morbidity and mortality when administered to PVM infected, symptomatic mice, and thus hold promise as realistic therapeutic strategies for severe respiratory virus infections in human subjects. PMID:18471897

  4. Parallel Navier-Stokes computations on shared and distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Jayasimha, D. N.; Pillay, Sasi Kumar

    1995-01-01

    We study a high order finite difference scheme to solve the time accurate flow field of a jet using the compressible Navier-Stokes equations. As part of our ongoing efforts, we have implemented our numerical model on three parallel computing platforms to study the computational, communication, and scalability characteristics. The platforms chosen for this study are a cluster of workstations connected through fast networks (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and a distributed memory multiprocessor (the IBM SP1). Our focus in this study is on the LACE testbed. We present some results for the Cray YMP and the IBM SP1 mainly for comparison purposes. On the LACE testbed, we study: (1) the communication characteristics of Ethernet, FDDI, and the ALLNODE networks and (2) the overheads induced by the PVM message passing library used for parallelizing the application. We demonstrate that clustering of workstations is effective and has the potential to be computationally competitive with supercomputers at a fraction of the cost.
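
    Message-passing overheads of the kind measured here are commonly characterized with a round-trip "ping-pong" loop; the C sketch below shows the idea against the pvm3 API (the structure is assumed, not taken from the paper). The peer task simply echoes each message back with the reply tag.

        /* Time round trips of a given message size and return the mean
           one-way time per message, in microseconds. */
        #include <sys/time.h>
        #include "pvm3.h"

        #define TAG_PING 1
        #define TAG_PONG 2

        double pingpong_usec(int peer_tid, char *buf, int bytes, int reps)
        {
            struct timeval t0, t1;
            gettimeofday(&t0, NULL);
            for (int i = 0; i < reps; i++) {
                pvm_initsend(PvmDataDefault);
                pvm_pkbyte(buf, bytes, 1);
                pvm_send(peer_tid, TAG_PING);
                pvm_recv(peer_tid, TAG_PONG);    /* wait for the echo */
                pvm_upkbyte(buf, bytes, 1);
            }
            gettimeofday(&t1, NULL);
            double us = (t1.tv_sec - t0.tv_sec) * 1e6
                      + (t1.tv_usec - t0.tv_usec);
            return us / (2.0 * reps);            /* one-way time */
        }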

  5. Final report of PVM-6 and PVM-7 Weather Documentation, AFCRL/Minuteman Report Number 6

    DTIC Science & Technology

    1975-09-11

    Air Force Cambridge Research Laboratories, Hanscom AFB, MA 01731. Cleared for public release under DoD Directive 5200.20; no restrictions are imposed upon its use and disclosure (USAFGL ltr, 1 Aug 1983). Distribution Statement A: approved for public release; distribution unlimited. Report No. AFCRL-TR-75-0481, Final Report of PVM-6 and PVM-7 Weather Documentation.

  6. Non-Traumatic Myositis Ossificans in the Lumbosacral Paravertebral Muscle

    PubMed Central

    Jung, DaeYoung; Roh, Ji Hyeon

    2013-01-01

    Myositis ossificans (MO) is a benign condition of non-neoplastic heterotopic bone formation in the muscle or soft tissue. Trauma plays a role in the development of MO, thus, non-traumatic MO is very rare. Although MO may occur anywhere in the body, it is rarely seen in the lumbosacral paravertebral muscle (PVM). Herein, we report a case of non-traumatic MO in the lumbosacral PVM. A 42-year-old man with no history of trauma was referred to our hospital for pain in the low back, left buttock, and left thigh. On physical examination, a slightly tender, hard, and fixed mass was palpated in the left lumbosacral PVM. Computed tomography showed a calcified mass within the left lumbosacral PVM. Magnetic resonance imaging (MRI) showed heterogeneous high signal intensity in T1- and T2-weighted image, and no enhancement of the mass was found in the postcontrast T1-weighted MRI. The lack of typical imaging features required an open biopsy, and MO was confirmed. MO should be considered in the differential diagnosis when the imaging findings show a mass involving PVM. When it is difficult to distinguish MO from soft tissue or bone malignancy by radiology, it is necessary to perform a biopsy to confirm the diagnosis. PMID:23908707

  7. Test of prototype liquid-water-content meter for aircraft use

    NASA Technical Reports Server (NTRS)

    Gerber, Hermann E.

    1993-01-01

    This report describes the effort undertaken to meet the objectives of National Science Foundation Grant ATM-9207345 titled 'Test of Prototype Liquid-Water-Content Meter for Aircraft Use.' Three activities were proposed for testing the new aircraft instrument, PVM-100A: (1) Calibrate the PVM-100A in a facility where the liquid-water-content (LWC) channel and the integrated surface area channel (PSA) could be compared to standard means for LWC and PSA measurements. Scaling constants for the channels were to be determined in this facility. The fog/wind tunnel at ECN, Petten, The Netherlands was judged the most suitable facility for this effort. (2) Expose the PVM-100A to high wind speeds similar to those expected on research aircraft, and test the anti-icing heaters on the PVM-100A under typical icing conditions expected in atmospheric clouds. The high-speed icing tunnel at NRC, Ottawa, Canada was to be utilized. (3) Operate the PVM-100A on an aircraft during cloud penetrations to determine its stability and practicality for such measurements. The C-131A aircraft of the University of Washington was the aircraft of opportunity for these tests, which were to be conducted during the 4-week Atlantic Stratocumulus Transition Experiment (ASTEX) in June of 1992.

  8. On the anatomy and histology of the pubovisceral muscle enthesis in women.

    PubMed

    Kim, Jinyong; Ramanah, Rajeev; DeLancey, John O L; Ashton-Miller, James A

    2011-09-01

    The origin of the pubovisceral muscle (PVM) from the pubic bone is known to be at elevated risk for injury during difficult vaginal births. We examined the anatomy and histology of its enthesial origin to classify its type and see if it differs from appendicular entheses. Parasagittal sections of the pubic bone, PVM enthesis, myotendinous junction, and muscle proper were harvested from five female cadavers (51-98 years). Histological sections were prepared with hematoxylin and eosin, Masson's trichrome, and Verhoeff-Van Gieson stains. The type of enthesis was identified according to a published enthesial classification scheme. Quantitative imaging analysis was performed in sampling bands 2 mm apart along the enthesis to determine its cross-sectional area and composition. The PVM enthesis can be classified as a fibrous enthesis. The PVM muscle fibers terminated in collagenous fibers that insert tangentially onto the periosteum of the pubic bone for the most part. Sharpey's fibers were not observed. In a longitudinal cross-section, the area of the connective tissue and muscle becomes equal approximately 8 mm from the pubic bone. The PVM originates bilaterally from the pubic bone via fibrous entheses whose collagen fibers arise tangentially from the periosteum of the pubic bone. Copyright © 2010 Wiley-Liss, Inc.

  9. On the Anatomy and Histology of the Pubovisceral Muscle Enthesis in Women

    PubMed Central

    Kim, Jinyong; Ramanah, Rajeev; DeLancey, John O. L.; Ashton-Miller, James A.

    2012-01-01

    Aims: The origin of the pubovisceral muscle (PVM) from the pubic bone is known to be at elevated risk for injury during difficult vaginal births. We examined the anatomy and histology of its enthesial origin to classify its type and see if it differs from appendicular entheses. Methods: Parasagittal sections of the pubic bone, PVM enthesis, myotendinous junction and muscle proper were harvested from five female cadavers (51-98 years). Histological sections were prepared with hematoxylin and eosin, Masson's trichrome, and Verhoeff-Van Gieson stains. The type of enthesis was identified according to a published enthesial classification scheme. Quantitative imaging analysis was performed in sampling bands 2 mm apart along the enthesis to determine its cross-sectional area and composition. Results: The PVM enthesis can be classified as a fibrous enthesis. The PVM muscle fibers terminated in collagenous fibers that insert tangentially onto the periosteum of the pubic bone for the most part. Sharpey's fibers were not observed. In a longitudinal cross-section, the area of the connective tissue and muscle becomes equal approximately 8 mm from the pubic bone. Conclusion: The PVM originates bilaterally from the pubic bone via fibrous entheses whose collagen fibers arise tangentially from the periosteum of the pubic bone. PMID:21567449

  10. Portable Video Media Versus Standard Verbal Communication in Surgical Information Delivery to Nurses: A Prospective Multicenter, Randomized Controlled Crossover Trial.

    PubMed

    Kam, Jonathan; Ainsworth, Hannah; Handmer, Marcus; Louie-Johnsun, Mark; Winter, Matthew

    2016-10-01

    Continuing education of health professionals is important for delivery of quality health care. Surgical nurses are often required to understand surgical procedures. Nurses need to be aware of the expected outcomes and recognize potential complications of such procedures during their daily work. Traditional educational methods, such as conferences and tutorials or informal education at the bedside, have many drawbacks for delivery of this information in a universal, standardized, and timely manner. The rapid uptake of portable media devices makes portable video media (PVM) a potential alternative to current educational methods. To compare PVM to standard verbal communication (SVC) for surgical information delivery and educational training for nurses and evaluate its impact on knowledge acquisition and participant satisfaction. Prospective, multicenter, randomized controlled crossover trial. Two hospitals: Gosford District Hospital and Wyong Hospital. Seventy-two nursing staff (36 at each site). Information delivery via PVM (a 7-minute video) was compared to information delivered via SVC. Knowledge acquisition was measured by a 32-point questionnaire, and satisfaction with the method of education delivery was measured using the validated Client Satisfaction Questionnaire (CSQ-8). Knowledge acquisition was higher via PVM compared to SVC 25.9 (95% confidence interval [CI] 25.2-26.6) versus 24.3 (95% CI 23.5-25.1), p = .004. Participant satisfaction was higher with PVM 29.5 (95% CI 28.3-30.7) versus 26.5 (95% CI 25.1-27.9), p = .003. Following information delivery via SVC, participants had a 6% increase in knowledge scores, 24.3 (95% CI 23.5-25.1) versus 25.7 (95% CI 24.9-26.5) p = .001, and a 13% increase in satisfaction scores, 26.5 (95% CI 25.1-27.9) versus 29.9 (95% CI 28.8-31.0) p < .001, when they crossed over to information delivery via PVM. PVM provides a novel method for providing education to nurses that improves knowledge retention and satisfaction with the educational process. © 2016 Sigma Theta Tau International.

  11. Model institutional infrastructures for recycling of photovoltaic modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reaven, S.J.; Moskowitz, P.D.; Fthenakis, V.

    1996-01-01

    How will photovoltaic modules (PVMs) be recycled at the end of their service lives? This question has technological and institutional components (Reaven, 1994a). The technological aspect concerns the physical means of recycling: what advantages and disadvantages of the several existing and emerging mechanical, thermal, and chemical recycling processes and facilities merit consideration? The institutional dimension refers to the arrangements for recycling: what are the operational and financial roles of the parties with an interest in PVM recycling? These parties include PVM manufacturers; trade organizations; distributors and retailers; residential, commercial, and utility PVM users; waste collectors, transporters, and reclaimers; and governments.

  12. Intranasal treatment with a novel immunomodulator mediates innate immune protection against lethal pneumonia virus of mice.

    PubMed

    Martinez, Elisa C; Garg, Ravendra; Shrivastava, Pratima; Gomis, Susantha; van Drunen Littel-van den Hurk, Sylvia

    2016-11-01

    Respiratory syncytial virus (RSV) is the leading cause of acute lower respiratory tract infections in infants and young children. There are no licensed RSV vaccines available, and the few treatment options for high-risk individuals are either extremely costly or cause severe side effects and toxicity. Immunomodulation mediated by a novel formulation consisting of the toll-like receptor 3 agonist poly(I:C), an innate defense regulator peptide and a polyphosphazene (P-I-P) was evaluated in the context of lethal infection with pneumonia virus of mice (PVM). Intranasal delivery of a single dose of P-I-P protected adult mice against PVM when given 24 h prior to challenge. These animals experienced minimal weight loss, no clinical disease, 100% survival, and reduced lung pathology. Similar clinical outcomes were observed in mice treated up to 3 days prior to infection. P-I-P pre-treatment induced early mRNA and protein expression of key chemokine and cytokine genes, reduced the recruitment of neutrophils and eosinophils, decreased virus titers in the lungs, and modulated the delayed exacerbated nature of PVM disease without any short-term side effects. On day 14 post-infection, P-I-P-treated mice were confirmed to be PVM-free. These results demonstrate the capacity of this formulation to prevent PVM and possibly other viral respiratory infections. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Parallel computing on Unix workstation arrays

    NASA Astrophysics Data System (ADS)

    Reale, F.; Bocchino, F.; Sciortino, S.

    1994-12-01

    We have tested arrays of general-purpose Unix workstations used as MIMD systems for massive parallel computations. In particular we have solved numerically a demanding test problem with a 2D hydrodynamic code, originally developed to study astrophysical flows, by executing it on arrays either of DECstations 5000/200 on an Ethernet LAN, or of DECstations 3000/400, equipped with powerful Alpha processors, on an FDDI LAN. The code is appropriate for data-domain decomposition, and we have used a library for parallelization previously developed in our Institute, and easily extended to work on Unix workstation arrays by using the PVM software toolset. We have compared the parallel efficiencies obtained on arrays of several processors to those obtained on a dedicated MIMD parallel system, namely a Meiko Computing Surface (CS-1), equipped with Intel i860 processors. We discuss the feasibility of using non-dedicated parallel systems and conclude that the convenience depends essentially on the size of the computational domain as compared to the relative processor power and network bandwidth. We point out that, looking ahead, parallel development of processor and network technology is important, and that the software still offers great opportunities for improvement, especially in terms of latency times in the message-passing protocols. In conditions of significant gain in terms of speedup, such workstation arrays represent a cost-effective approach to massive parallel computations.
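
    The break-even argument above can be made concrete with the usual first-order cost model (a standard sketch, not the authors' analysis). If a step requires T_comp of computation spread over p processors plus a message of B bytes at latency L and bandwidth W, then

        T(p) = T_comp/p + L + B/W,    E(p) = T(1) / (p * T(p))

    Efficiency stays near 1 only while T_comp/p dominates L + B/W; since per-processor computation grows with subdomain volume while boundary communication grows only with its surface, larger computational domains favor the workstation array, which is the tradeoff noted above.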

  14. Shared elements of host-targeting pathways among apicomplexan parasites of differing lifestyles.

    PubMed

    Pellé, Karell G; Jiang, Rays H Y; Mantel, Pierre-Yves; Xiao, Yu-Ping; Hjelmqvist, Daisy; Gallego-Lopez, Gina M; O T Lau, Audrey; Kang, Byung-Ho; Allred, David R; Marti, Matthias

    2015-11-01

    Apicomplexans are a diverse group of obligate parasites occupying different intracellular niches that require modification to meet the needs of the parasite. To efficiently manipulate their environment, apicomplexans translocate numerous parasite proteins into the host cell. Whereas some parasites remain contained within a parasitophorous vacuole membrane (PVM) throughout their developmental cycle, others do not, a difference that affects the machinery needed for protein export. A signal-mediated pathway for protein export into the host cell has been characterized in Plasmodium parasites, which maintain the PVM. Here, we functionally demonstrate an analogous host-targeting pathway involving organellar staging prior to secretion in the related bovine parasite, Babesia bovis, a parasite that destroys the PVM shortly after invasion. Taking into account recent identification of a similar signal-mediated pathway in the coccidian parasite Toxoplasma gondii, we suggest a model in which this conserved pathway has evolved in multiple steps from signal-mediated trafficking to specific secretory organelles for controlled secretion to a complex protein translocation process across the PVM. © 2015 John Wiley & Sons Ltd.

  15. Generalization of soft phonon modes

    NASA Astrophysics Data System (ADS)

    Rudin, Sven P.

    2018-04-01

    Soft phonon modes describe a collective movement of atoms that transforms a higher-symmetry crystal structure into a lower-symmetry crystal structure. Such structural transformations occur at finite temperatures, where the phonons (i.e., the low-temperature vibrational modes) and the static perfect crystal structures provide an incomplete picture of the dynamics. Here, principal vibrational modes (PVMs) are introduced as descriptors of the dynamics of a material system with N atoms. The PVMs represent the independent collective movements of the atoms at a given temperature. Molecular dynamics (MD) simulations, here in the form of quantum MD using density functional theory calculations, provide both the data describing the atomic motion and the data used to construct the PVMs. The leading mode, PVM0, represents the 3N-dimensional direction in which the system moves with greatest amplitude. For structural phase transitions, PVM0 serves as a generalization of soft phonon modes. At low temperatures, PVM0 reproduces the soft phonon mode in systems where one phonon dominates the phase transformation. In general, multiple phonon modes combine to describe a transformation, in which case PVM0 culls these phonon modes. Moreover, while soft phonon modes arise in the higher-symmetry crystal structure, PVM0 can be equally well calculated on either side of the structural phase transition. Two applications demonstrate these properties: first, transitions into and out of bcc titanium, and, second, the two crystal structures proposed for the β phase of uranium, the higher-symmetry structure of which stabilizes with temperature.
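
    One natural construction consistent with this description (offered as an illustrative assumption, not the paper's stated algorithm) treats the PVMs as principal components of the MD displacement covariance:

        C_ij = < u_i(t) u_j(t) >_t,    C v_k = lambda_k v_k,    lambda_0 >= lambda_1 >= ...

    where the vector u(t) of length 3N collects the atomic displacements from the time-averaged structure sampled along the MD trajectory. Ordering the eigenpairs by decreasing eigenvalue makes PVM0 = v_0 the 3N-dimensional direction of largest mean-square amplitude, matching the description of the leading mode above.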

  16. Preparation of Chitosan-Based Hemostatic Sponges by Supercritical Fluid Technology

    PubMed Central

    Song, Hu-Fan; Chen, Ai-Zheng; Wang, Shi-Bin; Kang, Yong-Qiang; Ye, Shi-Fu; Liu, Yuan-Gang; Wu, Wen-Guo

    2014-01-01

    Using ammonium bicarbonate (AB) particles as a porogen, chitosan (CS)-based hemostatic porous sponges were prepared in supercritical carbon dioxide due to its low viscosity, small surface tension, and good compatibility with organic solvent. Fourier transform infrared spectroscopy (FTIR) spectra demonstrated that the chemical compositions of CS and poly-(methyl vinyl ether-co-maleic anhydride) (PVM/MA) were not altered during the phase inversion process. The morphology and structure of the sponge after the supercritical fluid (SCF) process were observed by scanning electron microscopy (SEM). The resulting hemostatic sponges showed a relatively high porosity (about 80%) with a controllable pore size ranging from 0.1 to 200 μm. The concentration of PVM/MA had no significant influence on the porosity of the sponges. Comparative experiments on biological assessment and hemostatic effect between the resulting sponges and Avitene® were also carried out. With the incorporation of PVM/MA into the CS-based sponges, the water absorption rate of the sponges increased significantly, and the CS-PVM/MA sponges showed a similar water absorption rate (about 90%) to that of Avitene®. The results of the whole blood clotting experiment and animal experiment also demonstrated that the clotting ability of the CS-PVM/MA sponges was similar to that of Avitene®. All these results elementarily verified that the sponges prepared in this study were suitable for hemostasis and demonstrated the feasibility of using SCF-assisted phase inversion technology to produce hemostatic porous sponges. PMID:28788577

  17. Animal model of respiratory syncytial virus: CD8+ T cells cause a cytokine storm that is chemically tractable by sphingosine-1-phosphate 1 receptor agonist therapy.

    PubMed

    Walsh, Kevin B; Teijaro, John R; Brock, Linda G; Fremgen, Daniel M; Collins, Peter L; Rosen, Hugh; Oldstone, Michael B A

    2014-06-01

    The cytokine storm is an intensified, dysregulated, tissue-injurious inflammatory response driven by cytokine and immune cell components. The cytokine storm during influenza virus infection, whereby the amplified innate immune response is primarily responsible for pulmonary damage, has been well characterized. Now we describe a novel event where virus-specific T cells induce a cytokine storm. The paramyxovirus pneumonia virus of mice (PVM) is a model of human respiratory syncytial virus (hRSV). Unexpectedly, when C57BL/6 mice were infected with PVM, the innate inflammatory response was undetectable until day 5 postinfection, at which time CD8(+) T cells infiltrated into the lung, initiating a cytokine storm by their production of gamma interferon (IFN-γ) and tumor necrosis factor alpha (TNF-α). Administration of an immunomodulatory sphingosine-1-phosphate (S1P) receptor 1 (S1P1R) agonist significantly inhibited the PVM-elicited cytokine storm by blunting the PVM-specific CD8(+) T cell response, resulting in diminished pulmonary disease and enhanced survival. A dysregulated, overly exuberant immune response, termed a "cytokine storm," accompanies virus-induced acute respiratory diseases (VARV), is primarily responsible for the accompanying high morbidity and mortality, and can be controlled therapeutically in influenza virus infection of mice and ferrets by administration of sphingosine-1-phosphate 1 receptor (S1P1R) agonists. Here, two novel findings are recorded. First, in contrast to influenza infection, where the cytokine storm is initiated early by the innate immune system, for pneumonia virus of mice (PVM), a model of RSV, the cytokine storm is initiated late in infection by the adaptive immune response: specifically, by virus-specific CD8 T cells via their release of IFN-γ and TNF-α. Blockading these cytokines with neutralizing antibodies blunts the cytokine storm and protects the host. Second, PVM infection is controlled by administration of an S1P1R agonist.

  18. A Systems Engineering Framework for Implementing a Security and Critical Patch Management Process in Diverse Environments (Academic Departments' Workstations)

    NASA Astrophysics Data System (ADS)

    Mohammadi, Hadi

    Use of the Patch Vulnerability Management (PVM) process should be seriously considered for any networked computing system. The PVM process prevents the operating system (OS) and software applications from being attacked due to security vulnerabilities, which lead to system failures and critical data leakage. The purpose of this research is to create and design a Security and Critical Patch Management Process (SCPMP) framework based on Systems Engineering (SE) principles. This framework will assist Information Technology Department Staff (ITDS) to reduce IT operating time and costs and mitigate the risk of security and vulnerability attacks. Further, this study evaluates implementation of the SCPMP in the networked computing systems of an academic environment in order to: 1. Meet patch management requirements by applying SE principles. 2. Reduce the cost of IT operations and PVM cycles. 3. Improve the current PVM methodologies to prevent networked computing systems from becoming the targets of security vulnerability attacks. 4. Embed a Maintenance Optimization Tool (MOT) in the proposed framework. The MOT allows IT managers to make the most practicable choice of methods for deploying and installing released patches and vulnerability remediation. In recent years, there have been a variety of frameworks for security practices in every networked computing system to protect computer workstations from becoming compromised or vulnerable to security attacks, which can expose important information and critical data. I have developed a new mechanism for implementing PVM for maximizing security-vulnerability maintenance, protecting OS and software packages, and minimizing SCPMP cost. To increase computing system security in any diverse environment, particularly in academia, one must apply SCPMP. I propose an optimal maintenance policy that will allow ITDS to measure and estimate the variation of PVM cycles based on their department's requirements. My results demonstrate that the MOT optimizes the process of implementing SCPMP in academic workstations.

  19. Generalization of soft phonon modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudin, Sven P.

    Soft phonon modes describe a collective movement of atoms that transforms a higher-symmetry crystal structure into a lower-symmetry crystal structure. Such structural transformations occur at finite temperatures, where the phonons (i.e., the low-temperature vibrational modes) and the static perfect crystal structures provide an incomplete picture of the dynamics. In this paper, principal vibrational modes (PVMs) are introduced as descriptors of the dynamics of a material system with N atoms. The PVMs represent the independent collective movements of the atoms at a given temperature. Molecular dynamics (MD) simulations, here in the form of quantum MD using density functional theory calculations, provide both the data describing the atomic motion and the data used to construct the PVMs. The leading mode, PVM0, represents the 3N-dimensional direction in which the system moves with greatest amplitude. For structural phase transitions, PVM0 serves as a generalization of soft phonon modes. At low temperatures, PVM0 reproduces the soft phonon mode in systems where one phonon dominates the phase transformation. In general, multiple phonon modes combine to describe a transformation, in which case PVM0 culls these phonon modes. Moreover, while soft phonon modes arise in the higher-symmetry crystal structure, PVM0 can be equally well calculated on either side of the structural phase transition. Finally, two applications demonstrate these properties: first, transitions into and out of bcc titanium, and, second, the two crystal structures proposed for the β phase of uranium, the higher-symmetry structure of which stabilizes with temperature.

  20. Generalization of soft phonon modes

    DOE PAGES

    Rudin, Sven P.

    2018-04-27

    Soft phonon modes describe a collective movement of atoms that transforms a higher-symmetry crystal structure into a lower-symmetry crystal structure. Such structural transformations occur at finite temperatures, where the phonons (i.e., the low-temperature vibrational modes) and the static perfect crystal structures provide an incomplete picture of the dynamics. In this paper, principal vibrational modes (PVMs) are introduced as descriptors of the dynamics of a material system with N atoms. The PVMs represent the independent collective movements of the atoms at a given temperature. Molecular dynamics (MD) simulations, here in the form of quantum MD using density functional theory calculations, provide both the data describing the atomic motion and the data used to construct the PVMs. The leading mode, PVM0, represents the 3N-dimensional direction in which the system moves with greatest amplitude. For structural phase transitions, PVM0 serves as a generalization of soft phonon modes. At low temperatures, PVM0 reproduces the soft phonon mode in systems where one phonon dominates the phase transformation. In general, multiple phonon modes combine to describe a transformation, in which case PVM0 culls these phonon modes. Moreover, while soft phonon modes arise in the higher-symmetry crystal structure, PVM0 can be equally well calculated on either side of the structural phase transition. Finally, two applications demonstrate these properties: first, transitions into and out of bcc titanium, and, second, the two crystal structures proposed for the β phase of uranium, the higher-symmetry structure of which stabilizes with temperature.

  1. A Systems Engineering Framework for Implementing a Security and Critical Patch Management Process in Diverse Environments (Academic Departments' Workstations)

    ERIC Educational Resources Information Center

    Mohammadi, Hadi

    2014-01-01

    Use of the Patch Vulnerability Management (PVM) process should be seriously considered for any networked computing system. The PVM process prevents the operating system (OS) and software applications from being attacked due to security vulnerabilities, which lead to system failures and critical data leakage. The purpose of this research is to…

  2. Modeling and analysis of solar distributed generation

    NASA Astrophysics Data System (ADS)

    Ortiz Rivera, Eduardo Ivan

    Recent changes in the global economy are creating a big impact on our daily life. The price of oil is increasing and the number of reserves is shrinking every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons that many countries are looking to alternative energy sources to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracking (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on an approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions, and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on determining the optimal duty cycle for a dc-dc converter given prior knowledge of the load or load-matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are monitoring climate conditions, eliminating temperature and solar irradiance sensors, reducing the cost of a photovoltaic inverter system, and developing new algorithms to be integrated with maximum power point tracking algorithms. Finally, several PV power applications are presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a dc motor connected to a PVM, and a novel single-phase photovoltaic inverter system using the Z-source converter.
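
    The third MPPT algorithm lends itself to a compact illustration. For an ideal (lossless) buck-boost converter, the input resistance seen by the module is Rin = Rload((1-D)/D)^2, so matching Rin to the module's maximum-power-point resistance Vmpp/Impp fixes the duty cycle in closed form. The sketch below is ours, under that ideal-converter assumption, and is not code from the thesis:

        #include <math.h>
        #include <stdio.h>

        /* Optimal duty cycle for an ideal buck-boost converter under
         * load matching: Rin = Rload * ((1-D)/D)^2 = Vmpp/Impp
         * implies D = 1 / (1 + sqrt(Rin/Rload)). */
        double optimal_duty_cycle(double v_mpp, double i_mpp, double r_load)
        {
            double r_in = v_mpp / i_mpp;   /* PVM resistance at the MPP */
            return 1.0 / (1.0 + sqrt(r_in / r_load));
        }

        int main(void)
        {
            /* Illustrative numbers only: a 17.1 V, 3.5 A MPP and a 10 ohm load. */
            printf("D = %.3f\n", optimal_duty_cycle(17.1, 3.5, 10.0));
            return 0;
        }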

  3. Kinetic and kinematic evaluation of compensatory movements of the head, pelvis and thoracolumbar spine associated with asymmetric weight bearing of the pelvic limbs in trotting dogs.

    PubMed

    Hicks, D A; Millis, D L

    2014-01-01

    To determine ground reaction forces, head and pelvis vertical motion (HVM and PVM, respectively), and thoracolumbar lateral angular motion (LAM) of the spine using kinematic gait analysis in dogs with mild asymmetric weight-bearing of the pelvic limbs while trotting. Twenty-seven hound-type dogs were fitted with reflective markers placed on the sagittal crest of the skull, the ischiatic tuberosity, and the thoracolumbar spine to track motion while trotting. Kinetic and kinematic data were used to characterize asymmetry between the left and right pelvic limbs, and to describe HVM, PVM and thoracolumbar LAM. Maximum and minimum position and total motion values were determined for each measured variable. Dogs with asymmetric weight bearing of the pelvic limbs had greater PVM on the side with a greater peak vertical force (PVF), and greater thoracolumbar LAM toward the side with a lower PVF while trotting. No differences in mean HVM were detected, and there were no significant correlations between the magnitude of HVM, PVM and thoracolumbar LAM and the degree of asymmetric weight bearing. Dogs with subtle asymmetric weight bearing of a pelvic limb had patterns of body motion that may be useful in identifying subtle lameness: greater PVM on the side with greater weight bearing and greater thoracolumbar LAM toward the side with less weight bearing while trotting. Description of these compensatory movements is valuable when evaluating dogs with subtle weight-bearing asymmetry in the pelvic limbs and may improve the sensitivity of lameness detection during subjective clinical lameness examination.

  4. Higher operation temperature quadrant photon detectors of 2-11 μm wavelength radiation with large photosensitive areas

    NASA Astrophysics Data System (ADS)

    Pawluczyk, J.; Sosna, A.; Wojnowski, D.; Koźniewski, A.; Romanis, M.; Gawron, W.; Piotrowski, J.

    2017-10-01

    We report on quadrant photon HgCdTe detectors optimized for the 2-11 μm wavelength spectral range with Peltier or no cooling, with quad-cell photosensitive areas of 1×1 to 4×4 mm. The devices are fabricated as photoconductors or as multiple photovoltaic cells connected in series (PVM). The former are characterized by a relatively uniform photosensitive area. The PVM photovoltaic cells are distributed along the wafer surface, comprising a periodic stripe structure with a period of 20 μm. Within each period, there is an insensitive gap/trench < 9 μm wide between stripe mesas. The resulting spatial quantization error prevents positioning of a beam spot of size close to the period, but becomes negligible for the optimal spot size comparable to a quadrant-cell area. The photoconductors produce 1/f noise with a knee frequency of about 10 kHz, due to the bias necessary for their operation. The PVM photodiodes are typically operated at 0 V bias, so they generate no 1/f noise and operation from DC is enabled. At 230 K, an upper corner frequency of 16 to 100 MHz is obtained for the photoconductor and 60 to 80 MHz for the PVM, with normalized detectivity D* of 6×10^7 cm·Hz^1/2/W to >1.4×10^8 cm·Hz^1/2/W for the photoconductor and >1.7×10^8 cm·Hz^1/2/W for the PVM, allowing for position control of the radiation beam with submicron accuracy at 16 MHz for a pulsed 10.6 μm radiation spot of 0.8 mm diameter at close-to-maximal input radiation power density in the range of linear detector operation.
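
    The submicron position control quoted above conventionally rests on normalized differences of the quadrant signals. As a textbook-style estimate (not taken from the paper), label the photocurrents $S_A$, $S_B$ (upper-left, upper-right) and $S_C$, $S_D$ (lower-left, lower-right); then

        $x \approx k_x \, \frac{(S_B + S_D) - (S_A + S_C)}{S_A + S_B + S_C + S_D}, \qquad y \approx k_y \, \frac{(S_A + S_B) - (S_C + S_D)}{S_A + S_B + S_C + S_D},$

    where $k_x$ and $k_y$ are calibration constants set by the spot size and, for the PVM devices, by the stripe period discussed above.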

  5. Characterization of the passive and active material parameters of the pubovisceralis muscle using an inverse numerical method.

    PubMed

    Silva, M E T; Parente, M P L; Brandão, S; Mascarenhas, T; Natal Jorge, R M

    2018-04-11

    The mechanical characteristics of the female pelvic floor are relevant to understanding pelvic floor dysfunctions (PFD) and how they are related to changes in biomechanical behavior. Urinary incontinence (UI) and pelvic organ prolapse (POP) are the most common pathologies, and they can be associated with changes in the mechanical properties of the supportive structures in the female pelvic cavity. PFD have been studied through different methods, from experimental tensile tests using tissues from fresh female cadavers or tissues collected at the time of a transvaginal hysterectomy procedure, to the application of imaging techniques. In this work, an inverse finite element analysis (FEA) was applied to understand the passive and active behavior of the pubovisceralis muscle (PVM) during the Valsalva maneuver and muscle active contraction, respectively. Individual numerical models of women without pathology, with stress UI (SUI), and with POP were built based on magnetic resonance images, including the PVM and surrounding structures. The passive and active material parameters of a transversely isotropic hyperelastic constitutive model were estimated for the three groups. The values of the material constants were significantly higher for the women with POP when compared with the other two groups. The PVM of women with POP showed the highest stiffness. Additionally, the influence of these parameters was analyzed by evaluating the stress-strain and force-displacement responses. The force produced by the PVM in women with POP was 47% and 82% higher when compared to women without pathology and with SUI, respectively. The inverse FEA allowed the material parameters of the PVM to be estimated using input information acquired non-invasively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  7. Air Traffic Complexity Measurement Environment (ACME): Software User's Guide

    NASA Technical Reports Server (NTRS)

    1996-01-01

    A user's guide for the Air Traffic Complexity Measurement Environment (ACME) software is presented. The ACME consists of two major components, a complexity analysis tool and user interface. The Complexity Analysis Tool (CAT) analyzes complexity off-line, producing data files which may be examined interactively via the Complexity Data Analysis Tool (CDAT). The Complexity Analysis Tool is composed of three independently executing processes that communicate via PVM (Parallel Virtual Machine) and Unix sockets. The Runtime Data Management and Control process (RUNDMC) extracts flight plan and track information from a SAR input file, and sends the information to GARP (Generate Aircraft Routes Process) and CAT (Complexity Analysis Task). GARP in turn generates aircraft trajectories, which are utilized by CAT to calculate sector complexity. CAT writes flight plan, track and complexity data to an output file, which can be examined interactively. The Complexity Data Analysis Tool (CDAT) provides an interactive graphic environment for examining the complexity data produced by the Complexity Analysis Tool (CAT). CDAT can also play back track data extracted from System Analysis Recording (SAR) tapes. The CDAT user interface consists of a primary window, a controls window, and miscellaneous pop-ups. Aircraft track and position data is displayed in the main viewing area of the primary window. The controls window contains miscellaneous control and display items. Complexity data is displayed in pop-up windows. CDAT plays back sector complexity and aircraft track and position data as a function of time. Controls are provided to start and stop playback, adjust the playback rate, and reposition the display to a specified time.
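
    The RUNDMC-to-GARP/CAT traffic described here maps directly onto the standard PVM 3 pack-and-send idiom. The sketch below is a hypothetical rendering of one such exchange (the message tag and record layout are invented for illustration; the actual ACME message formats are not given in this guide):

        #include "pvm3.h"

        #define TAG_FLIGHT_PLAN 1   /* hypothetical message tag */

        /* Sender side (e.g., RUNDMC): pack an id plus a 4-double state
         * record into a fresh buffer and send it to a peer task. */
        void send_flight_plan(int dest_tid, int id, const double state[4])
        {
            pvm_initsend(PvmDataDefault);      /* new buffer, XDR encoding */
            pvm_pkint(&id, 1, 1);
            pvm_pkdouble((double *)state, 4, 1);
            pvm_send(dest_tid, TAG_FLIGHT_PLAN);
        }

        /* Receiver side (e.g., GARP or CAT): block until a matching
         * message arrives, then unpack in the same order it was packed. */
        void recv_flight_plan(int *id, double state[4])
        {
            pvm_recv(-1, TAG_FLIGHT_PLAN);     /* -1 accepts any sender */
            pvm_upkint(id, 1, 1);
            pvm_upkdouble(state, 4, 1);
        }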

  8. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-01-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.
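
    For readers unfamiliar with page-based SVM, the bookkeeping such a layer maintains can be pictured with a textbook-style (Li-and-Hudak-flavored) sketch; this is illustrative only and is not the implementation studied in the paper:

        /* Per-page directory state for a page-based SVM protocol. */
        enum access { ACC_NONE, ACC_READ, ACC_WRITE };

        struct svm_page {
            int           owner;        /* processor holding the master copy */
            unsigned long copyset;      /* bitmask of processors with replicas */
            enum access   local_access; /* rights of this processor right now */
        };

        /* Write-fault skeleton: fetch the page and ownership, invalidate
         * all replicas, then write locally. Restricting page replication,
         * one of the parameters studied above, amounts to bounding how
         * many bits of copyset may be set at once. */
        void on_write_fault(struct svm_page *p, int me)
        {
            /* ...request page + ownership from p->owner (messaging omitted)... */
            p->copyset      = 1UL << me;   /* only the writer keeps a copy */
            p->owner        = me;
            p->local_access = ACC_WRITE;
        }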

  9. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-09-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.

  10. Improved approach to quantitative cardiac volumetrics using automatic thresholding and manual trimming: a cardiovascular MRI study.

    PubMed

    Rayarao, Geetha; Biederman, Robert W W; Williams, Ronald B; Yamrozik, June A; Lombardi, Richard; Doyle, Mark

    2018-01-01

    To establish the clinical validity and accuracy of automatic thresholding and manual trimming (ATMT) by comparing the method with the conventional contouring method for in vivo cardiac volume measurements. CMR was performed on 40 subjects (30 patients and 10 controls) using steady-state free precession cine sequences with slices oriented in the short-axis and acquired contiguously from base to apex. Left ventricular (LV) volumes, end-diastolic volume, end-systolic volume, and stroke volume (SV) were obtained with ATMT and with the conventional contouring method. Additionally, SV was measured independently using CMR phase velocity mapping (PVM) of the aorta for validation. Three methods of calculating SV were compared by applying Bland-Altman analysis. The Bland-Altman standard deviation of variation (SD) and offset bias for LV SV for the three sets of data were: ATMT-PVM (7.65, [Formula: see text]), ATMT-contours (7.85, [Formula: see text]), and contour-PVM (11.01, 4.97), respectively. Equating the observed range to the error contribution of each approach, the error magnitude of ATMT:PVM:contours was in the ratio 1:2.4:2.5. Use of ATMT for measuring ventricular volumes accommodates trabeculae and papillary structures more intuitively than contemporary contouring methods. This results in lower variation when analyzing cardiac structure and function and consequently improved accuracy in assessing chamber volumes.

  11. Start-up capabilities of photovoltaic module for the International Space Station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hajela, G.; Hague, L.

    1997-12-31

    The International Space Station (ISS) uses four photovoltaic modules (PVMs) to supply electric power for the US On-Orbit Segment (USOS). The ISS is assembled on orbit over a period of about 5 years and over 40 stages. PVMs are launched and integrated with the ISS at different times during the ISS assembly. During early stages, the electric power is provided by the integrated truss segment (ITS) P6; subsequently, ITS P4, S4, and S6 are launched. PVMs are launched into space in the National Space Transportation System (NSTS) cargo bay. Each PVM consists of two independent power channels. After the NSTS docks with the ISS, the PVM is removed from the cargo bay and installed on the ISS. At this stage the PVM is in a stowed configuration and its batteries are in a fully discharged state. The start-up consists of initialization and checkout of all hardware, deployment of the solar array wing (SAW) and photovoltaic radiator (PVR), thermal conditioning of the batteries, and charging of the batteries, though not necessarily in the same order for all PVMs. PVMs are designed to be capable of on-orbit start-up, within a specified time period, when external power is applied to a specified electrical interface. This paper describes the essential steps required for PVM start-up and how these operations are performed for various PVMs. The integrated operations scenarios (IOS) prepared by the NASA Johnson Space Center detail specific procedures and timelines for the start-up of each PVM. The paper describes how dormant batteries are brought to their normal operating temperature range and then charged to 100% state of charge (SOC). The total time required to complete start-up is computed and compared to the IOS timelines. The external power required during start-up is computed and compared to the requirements and/or available power on the ISS. Also described is how these start-up procedures can be adapted for restart of PVMs when required.

  12. THE MODIFYING EFFECTS OF CERTAIN SUBSTANCES OF BACTERIAL ORIGIN ON THE COURSE OF INFECTION WITH PNEUMONIA VIRUS OF MICE (PVM)

    PubMed Central

    Horsfall, Frank L.; McCarty, Maclyn

    1947-01-01

    Evidence is presented which indicates that certain polysaccharide preparations derived from various bacterial species, as well as similar materials not of bacterial origin, are capable of lessening the severity of infection with pneumonia virus of mice (PVM) and inhibiting multiplication of the virus in mouse lungs infected with this agent. It seems probable that modification with respect to the virus is mediated by a substance which may be polysaccharide in nature. PMID:19871640

  13. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-09-02

    Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.
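
    Rendered as a data structure, the eager-send header the patent describes might look like the following; the field names and widths are invented for illustration and are not taken from the patent:

        #include <stdint.h>

        /* Hypothetical layout: the origin endpoint advertises the
         * read-only alias of its send buffer so the target can read the
         * transfer data through the shared read-only address space. */
        struct eager_send_header {
            uint32_t client;           /* PAMI endpoint triple...           */
            uint32_t context;
            uint32_t task;
            uint64_t ro_send_buf_addr; /* address in the shared read-only
                                          virtual address space             */
            uint64_t length;           /* bytes of transfer data            */
        };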

  14. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-09-16

    Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.

  15. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is used in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and also in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance-per-dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  16. Reflections on the ethics of participatory visual methods to engage communities in global health research

    PubMed Central

    Black, Gillian F.; Davies, Alun; Iskander, Dalia; Chambers, Mary

    2018-01-01

    There is a growing body of literature describing conceptual frameworks for working with participatory visual methods (PVM). Through a global health lens, this paper examines some key themes within these frameworks. We reflect on our experiences of working with an array of PVM to engage community members in Vietnam, Kenya, the Philippines and South Africa in biomedical research and public health. The participants that we have engaged in these processes live in under-resourced areas with a high prevalence of communicable and non-communicable diseases. Our paper describes some of the challenges that we have encountered while using PVM to foster knowledge exchange, build relationships and facilitate change among individuals and families, community members, health workers, biomedical scientists and researchers. We consider multiple ethical situations that have arisen through our work and discuss the ways in which we have navigated and negotiated them. We offer our reflections and learning from facilitating these processes and in doing so we add novel contributions to ethical framework concepts. PMID:29434532

  17. A distributed monitoring system for photovoltaic arrays based on a two-level wireless sensor network

    NASA Astrophysics Data System (ADS)

    Su, F. P.; Chen, Z. C.; Zhou, H. F.; Wu, L. J.; Lin, P. J.; Cheng, S. Y.; Li, Y. F.

    2017-11-01

    In this paper, a distributed on-line monitoring system based on a two-level wireless sensor network (WSN) is proposed for real time status monitoring of photovoltaic (PV) arrays to support the fine management and maintenance of PV power plants. The system includes the sensing nodes installed on PV modules (PVM), sensing and routing nodes installed on combiner boxes of PV sub-arrays (PVA), a sink node and a data management centre (DMC) running on a host computer. The first level WSN is implemented by the low-cost wireless transceiver nRF24L01, and it is used to achieve single hop communication between the PVM nodes and their corresponding PVA nodes. The second level WSN is realized by the CC2530 based ZigBee network for multi-hop communication among PVA nodes and the sink node. The PVM nodes are used to monitor the PVM working voltage and backplane temperature, and they send the acquired data to their PVA node via the nRF24L01 based first level WSN. The PVA nodes are used to monitor the array voltage, PV string current and environment irradiance, and they send the acquired and received data to the DMC via the ZigBee based second level WSN. The DMC is designed using the MATLAB GUIDE and MySQL database. Laboratory experiment results show that the system can effectively acquire, display, store and manage the operating and environment parameters of PVA in real time.
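
    The per-node payloads implied by this description (module voltage and backplane temperature at the first level; array voltage, string current, and irradiance added at the second) might be laid out as below; the field layout and scaling are our illustration only:

        #include <stdint.h>

        /* Hypothetical sample sent by a PVM sensing node to its PVA node
         * over the first-level (nRF24L01) link. */
        struct pvm_sample {
            uint16_t module_id;
            uint16_t voltage_mv;    /* PVM working voltage, millivolts     */
            int16_t  backplane_dc;  /* backplane temperature, 0.1 degC     */
        };

        /* The PVA node appends its own measurements before relaying
         * upstream on the second-level (CC2530/ZigBee) network. */
        struct pva_report {
            uint16_t array_id;
            uint16_t array_voltage_mv;
            uint16_t string_current_ma;
            uint16_t irradiance_wm2;
            uint16_t nmodules;
            struct pvm_sample modules[16];  /* capacity chosen arbitrarily */
        };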

  18. The immunity-related GTPase Irga6 dimerizes in a parallel head-to-head fashion.

    PubMed

    Schulte, Kathrin; Pawlowski, Nikolaus; Faelber, Katja; Fröhlich, Chris; Howard, Jonathan; Daumke, Oliver

    2016-03-02

    The immunity-related GTPases (IRGs) constitute a powerful cell-autonomous resistance system against several intracellular pathogens. Irga6 is a dynamin-like protein that oligomerizes at the parasitophorous vacuolar membrane (PVM) of Toxoplasma gondii leading to its vesiculation. Based on a previous biochemical analysis, it has been proposed that the GTPase domains of Irga6 dimerize in an antiparallel fashion during oligomerization. We determined the crystal structure of an oligomerization-impaired Irga6 mutant bound to a non-hydrolyzable GTP analog. Contrary to the previous model, the structure shows that the GTPase domains dimerize in a parallel fashion. The nucleotides in the center of the interface participate in dimerization by forming symmetric contacts with each other and with the switch I region of the opposing Irga6 molecule. The latter contact appears to activate GTP hydrolysis by stabilizing the position of the catalytic glutamate 106 in switch I close to the active site. Further dimerization contacts involve switch II, the G4 helix and the trans stabilizing loop. The Irga6 structure features a parallel GTPase domain dimer, which appears to be a unifying feature of all dynamin and septin superfamily members. This study contributes important insights into the assembly and catalytic mechanisms of IRG proteins as prerequisite to understand their anti-microbial action.

  19. Empirical Analysis and Refinement of Expert System Knowledge Bases

    DTIC Science & Technology

    1990-03-31

    the number of hidden units and the error rates is listed in Figure 6. 3.3. Cancer Data: A data set for evaluating the prognosis of breast cancer ... Alternative Rule Induction Methods: A data set for evaluating the prognosis of breast cancer recurrence was analyzed by Michalski's AQ15 rule induction program ... AQ15 7 2 32%; PVM 2 1 23%. Figure 6-3: Comparative Summary for AQ15 and PVM on Breast Cancer Data. 6.2.2. Alternative Decision Tree Induction Methods

  20. Can Multiple "Spatial" Virtual Timelines Convey the Relatedness of Chronological Knowledge across Parallel Domains?

    ERIC Educational Resources Information Center

    Korallo, Liliya; Foreman, Nigel; Boyd-Davis, Stephen; Moar, Magnus; Coulson, Mark

    2012-01-01

    Single linear virtual timelines have been used effectively with undergraduates and primary school children to convey the chronological ordering of historical items, improving on PowerPoint and paper/textual displays. In the present study, a virtual environment (VE) consisting of three parallel related timelines (world history and the histories of…

  1. Immunobiotic Lactobacillus administered post-exposure averts the lethal sequelae of respiratory virus infection.

    PubMed

    Percopo, Caroline M; Rice, Tyler A; Brenner, Todd A; Dyer, Kimberly D; Luo, Janice L; Kanakabandi, Kishore; Sturdevant, Daniel E; Porcella, Stephen F; Domachowske, Joseph B; Keicher, Jesse D; Rosenberg, Helene F

    2015-09-01

    We reported previously that priming of the respiratory tract with immunobiotic Lactobacillus prior to virus challenge protects mice against subsequent lethal infection with pneumonia virus of mice (PVM). We present here the results of gene microarray analysis, which document differential expression of proinflammatory mediators in response to PVM infection alone and those suppressed in response to Lactobacillus plantarum. We also demonstrate for the first time that intranasal inoculation with live or heat-inactivated L. plantarum or Lactobacillus reuteri promotes full survival from PVM infection when administered within 24 h after virus challenge. Survival in response to L. plantarum administered after virus challenge is associated with suppression of proinflammatory cytokines, limited virus recovery, and diminished neutrophil recruitment to lung tissue and airways. Utilizing this post-virus-challenge protocol, we found that protective responses elicited by L. plantarum at the respiratory tract were distinct from those at the gastrointestinal mucosa, as mice devoid of the anti-inflammatory cytokine interleukin (IL)-10 exhibit survival and inflammatory responses that are indistinguishable from those of their wild-type counterparts. Finally, although L. plantarum interacts specifically with pattern recognition receptors TLR2 and NOD2, the respective gene-deleted mice were fully protected against lethal PVM infection by L. plantarum, as are mice devoid of type I interferon receptors. Taken together, L. plantarum is a versatile and flexible agent that is capable of averting the lethal sequelae of severe respiratory infection both prior to and after virus challenge via complex and potentially redundant mechanisms. Published by Elsevier B.V.

  2. Crocodilian perivitelline membrane-bound sperm detection.

    PubMed

    Augustine, Lauren

    2017-05-01

    Advanced reproductive technologies (ARTs) are often employed with various taxa to enhance captive breeding programs and maintain genetic diversity. Perivitelline membrane-bound (PVM-bound) sperm detection has previously been demonstrated in avian and chelonian species as a useful technique for breeding management. In the absence of embryonic development within an egg, this technique can detect the presence of sperm trapped on the oocyte membrane, confirming breeding, male reproductive status, and pair compatibility. PVM-bound sperm were successfully detected in three clutches of Cuban crocodile (Crocodylus rhombifer) eggs at the Smithsonian's National Zoological Park (NZP) for the first time in any crocodilian species. PVM-bound sperm were detected in fresh and incubated C. rhombifer eggs, as well as eggs that were developing (banded) and those that were not (not banded). The results of this study showed significant differences in average sperm densities per egg between clutches (p = 0.001). Additionally, there was not a significant difference within clutches between eggs that banded and those that did not band (Clutch A, p = 0.505; Clutch B, p = 0.665; Clutch C, p = 0.266). The results of this study demonstrate the necessity of microscopically examining eggs that do not develop (do not band) to determine if sperm is present, which can help animal managers troubleshoot reproductive shortcomings. PVM-bound sperm detection could be a useful technique in assessing crocodilian breeding programs, and it has potential uses in studies assessing sperm storage, artificial insemination, and artificial incubation. This article is a U.S. Government work and is in the public domain in the USA.

  3. Torins are potent antimalarials that block replenishment of Plasmodium liver stage parasitophorous vacuole membrane proteins

    PubMed Central

    Hanson, Kirsten K.; Ressurreição, Ana S.; Buchholz, Kathrin; Prudêncio, Miguel; Herman-Ornelas, Jonathan D.; Rebelo, Maria; Beatty, Wandy L.; Wirth, Dyann F.; Hänscheid, Thomas; Moreira, Rui; Marti, Matthias; Mota, Maria M.

    2013-01-01

    Residence within a customized vacuole is a highly successful strategy used by diverse intracellular microorganisms. The parasitophorous vacuole membrane (PVM) is the critical interface between Plasmodium parasites and their possibly hostile, yet ultimately sustaining, host cell environment. We show that torins, developed as ATP-competitive mammalian target of rapamycin (mTOR) kinase inhibitors, are fast-acting antiplasmodial compounds that unexpectedly target the parasite directly, blocking the dynamic trafficking of the Plasmodium proteins exported protein 1 (EXP1) and upregulated in sporozoites 4 (UIS4) to the liver stage PVM and leading to efficient parasite elimination by the hepatocyte. Torin2 has single-digit, or lower, nanomolar potency in both liver and blood stages of infection in vitro and is likewise effective against both stages in vivo, with a single oral dose sufficient to clear liver stage infection. Parasite elimination and perturbed trafficking of liver stage PVM-resident proteins are both specific aspects of torin-mediated Plasmodium liver stage inhibition, indicating that torins have a distinct mode of action compared with currently used antimalarials. PMID:23836641

  4. The Automated Instrumentation and Monitoring System (AIMS) reference manual

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Hontalas, Philip; Listgarten, Sherry

    1993-01-01

    Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit which reconstructs program execution from the trace file; and a trace post-processor which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g. Sun Sparc and SGI) supporting X-Windows (in particular, X11R5, Motif 1.1.3).

  5. The " Swarm of Ants vs. Herd of Elephants" Debated Revisited: Performance Measurements of PVM-Overflow Across a Wide Spectrum of Architectures

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Jespersen, Dennis; Buning, Peter; Bailey, David (Technical Monitor)

    1996-01-01

    The Gordon Bell Prizes given out at Supercomputing every year include at least two categories: performance (highest GFLOP count) and price-performance (GFLOP/million $$) for real applications. In the past five years, the winners of the price-performance category all came from networks of workstations. This reflects three important facts: 1. supercomputers are still too expensive for the masses; 2. achieving high performance for real applications takes real work; and, most importantly, 3. it is possible to obtain acceptable performance for certain real applications on networks of workstations. With the continued advance of network technology as well as the increased performance of "desktop" workstations, the "Swarm of Ants vs. Herd of Elephants" debate, which began with vector multiprocessors (VPPs) against SIMD-type multiprocessors (e.g. the CM2), is now recast as VPPs against Symmetric Multiprocessors (SMPs, e.g. the SGI Power Challenge). This paper reports on performance studies we performed solving a large-scale (2-million grid point) CFD problem involving a Boeing 747, based on a parallel version of OVERFLOW that utilizes message passing on PVM. A performance-monitoring tool developed under NASA HPCC, called AIMS, was used to instrument and analyze the performance data thus obtained. We plan to compare performance data obtained across a wide spectrum of architectures, including the Cray C90, IBM SP2, and SGI Power Challenge cluster, with a group of workstations connected over a simple network. The metrics of comparison include speed-up, price-performance, throughput, and turn-around time. We also plan to present a plan of attack for various issues that will make the execution of Grand Challenge Applications across the Global Information Infrastructure a reality.
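
    For reference, the comparison metrics named at the end have standard definitions (the notation is ours, not the paper's): speedup on $p$ processors is $S(p) = T_1 / T_p$, where $T_p$ is the wall-clock time on $p$ processors, and price-performance is the sustained rate divided by system cost, e.g. GFLOP/s per million dollars, matching the Gordon Bell categories cited above.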

  6. [Parallel virtual reality visualization of extreme large medical datasets].

    PubMed

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and common-configuration computers of hospitals. This paper introduces several kernel techniques, including the hardware structure, software framework, load balancing, and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through the control panel built on the virtual reality modeling language (VRML). Experimental results demonstrate that this method provides promising, real-time results, playing the role of a good assistant in making clinical diagnoses.
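
    The Maximum Intensity Projection named above reduces, per ray, to a running maximum over resampled volume values, which is what makes it straightforward to distribute across a PC cluster (e.g., by assigning blocks of image rows to nodes). A minimal serial sketch of the per-ray kernel, with sample_volume() standing in for trilinear interpolation (our placeholder, not the paper's code):

        /* Maximum Intensity Projection along one ray: keep the brightest
         * of nsteps samples taken every dt along direction dir. */
        float mip_ray(const float origin[3], const float dir[3],
                      int nsteps, float dt,
                      float (*sample_volume)(const float p[3]))
        {
            float p[3] = { origin[0], origin[1], origin[2] };
            float maxval = 0.0f;
            for (int i = 0; i < nsteps; ++i) {
                float v = sample_volume(p);
                if (v > maxval) maxval = v;
                p[0] += dir[0] * dt;
                p[1] += dir[1] * dt;
                p[2] += dir[2] * dt;
            }
            return maxval;
        }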

  7. Reconditioning of Batteries on the International Space Station

    NASA Technical Reports Server (NTRS)

    Hajela, Gyan; Cohen, Fred; Dalton, Penni

    2004-01-01

    Primary source of electric power for the International Space Station (ISS) is the photovoltaic module (PVM). At assembly complete stage, the ISS will be served by 4 PVMs. Each PVM contains two independent power channels such that one failure will result in loss of only one power channel. During early stages of assembly, the ISS is served by only one PVM designated as P6. Solar arrays are used to convert solar flux into electrical power. Nickel hydrogen batteries are used to store electrical power for use during periods when the solar input is not adequate to support channel loads. Batteries are operated per established procedures that ensure that they are maintained within specified temperature limits, charge current is controlled to conform to a specified charge profile, and battery voltages are maintained within specified limits. Both power channels on the PVM P6 have been operating flawlessly since December 2000 with 100 percent power availability. All components, including batteries, are monitored regularly to ensure that they are operating within specified limits and to trend their wear out and age effects. The paper briefly describes the battery trend data. Batteries have started to show some effects of aging and a battery reconditioning procedure is being evaluated at this time. Reconditioning is expected to reduce cell voltage divergence and provide data that can be used to update the state of charge (SOC) computation in the software to account for battery age. During reconditioning, each battery, one at a time, will be discharged per a specified procedure and then returned to a full state of charge. The paper describes the reconditioning procedure and the expected benefits. The reconditioning procedures have been thoroughly coordinated by all affected technical teams and approved by all required boards. The reconditioning is tentatively scheduled for September 2004.

  8. Lactobacillus priming of the respiratory tract: Heterologous immunity and protection against lethal pneumovirus infection.

    PubMed

    Garcia-Crespo, Katia E; Chan, Calvin C; Gabryszewski, Stanislaw J; Percopo, Caroline M; Rigaux, Peter; Dyer, Kimberly D; Domachowske, Joseph B; Rosenberg, Helene F

    2013-03-01

    We showed previously that wild-type mice primed via intranasal inoculation with live or heat-inactivated Lactobacillus species were fully (100%) protected against the lethal sequelae of infection with the virulent pathogen, pneumonia virus of mice (PVM), a response that is associated with diminished expression of proinflammatory cytokines and diminished virus recovery. We show here that 40% of the mice primed with live Lactobacillus survived when PVM challenge was delayed for 5 months. This robust and sustained resistance to PVM infection resulting from prior interaction with an otherwise unrelated microbe is a profound example of heterologous immunity. We undertook the present study in order to understand the nature and unique features of this response. We found that intranasal inoculation with L. reuteri elicited rapid, transient neutrophil recruitment in association with proinflammatory mediators (CXCL1, CCL3, CCL2, CXCL10, TNF-alpha and IL-17A) but not Th1 cytokines. IFNγ does not contribute to survival promoted by Lactobacillus-priming. Live L. reuteri detected in lung tissue underwent rapid clearance, and was undetectable at 24 h after inoculation. In contrast, L. reuteri peptidoglycan (PGN) and L. reuteri genomic DNA (gDNA) were detected at 24 and 48 h after inoculation, respectively. In contrast to live bacteria, intranasal inoculation with isolated L. reuteri gDNA elicited no neutrophil recruitment, had minimal impact on virus recovery and virus-associated production of CCL3, and provided no protection against the negative sequelae of virus infection. Isolated PGN elicited neutrophil recruitment and proinflammatory cytokines but did not promote sustained survival in response to subsequent PVM infection. Overall, further evaluation of the responses leading to Lactobacillus-mediated heterologous immunity may provide insight into novel antiviral preventive modalities. Published by Elsevier B.V.

  9. Lactobacillus priming of the respiratory tract: heterologous immunity and protection against lethal pneumovirus infection

    PubMed Central

    Garcia-Crespo, Katia E.; Chan, Calvin C.; Gabryszewski, Stanislaw J.; Percopo, Caroline M.; Rigaux, Peter; Dyer, Kimberly D.; Domachowske, Joseph B.; Rosenberg, Helene F.

    2013-01-01

    We showed previously that wild-type mice primed via intranasal inoculation with live or heat-inactivated Lactobacillus species were fully (100%) protected against the lethal sequelae of infection with the virulent pathogen, pneumonia virus of mice (PVM), a response that is associated with diminished expression of proinflammatory cytokines and diminished virus recovery. We show here that 40% of the mice primed with live Lactobacillus survived when PVM challenge was delayed for 5 months. This robust and sustained resistance to PVM infection resulting from prior interaction with an otherwise unrelated microbe is a profound example of heterologous immunity. We undertook the present study in order to understand the nature and unique features of this response. We found that intranasal inoculation with L. reuteri elicited rapid, transient neutrophil recruitment in association with proinflammatory mediators (CXCL1, CCL3, CCL2, CXCL10, TNF-alpha and IL-17A) but not Th1 cytokines. IFNγ does not contribute to survival promoted by Lactobacillus-priming. Live L. reuteri detected in lung tissue underwent rapid clearance, and was undetectable at 24 hrs after inoculation. In contrast, L. reuteri peptidoglycan (PGN) and L. reuteri genomic DNA (gDNA) were detected at 24 and 48 hours after inoculation, respectively. In contrast to live bacteria, intranasal inoculation with isolated L. reuteri gDNA elicited no neutrophil recruitment, had minimal impact on virus recovery and virus-associated production of CCL3, and provided no protection against the negative sequelae of virus infection. Isolated PGN elicited neutrophil recruitment and proinflammatory cytokines but did not promote sustained survival in response to subsequent PVM infection. Overall, further evaluation of the responses leading to Lactobacillus-mediated heterologous immunity may provide insight into novel antiviral preventive modalities. PMID:23274789

  10. Effect of a triclosan/PVM/MA copolymer/fluoride dentifrice on volatile sulfur compounds in vitro.

    PubMed

    Pilch, S; Williams, M I; Cummins, D

    2005-01-01

    The objective of the investigation was to document the in vitro efficacy of a triclosan/PVM/MA copolymer/fluoride (TCF) dentifrice against the formation of volatile sulfur compounds (VSC) as well as the growth of H2S-producing bacteria. Clinical studies using organoleptic judges, gas chromatography, or a portable sulfide monitor have generally been employed in the assessment of treatments for the control of oral malodor. However, these studies are not appropriate for screening purposes because of the expense and time required. An in vitro method was developed for the purpose of screening new compounds, agents or formulations for their ability to control VSC formation and for determining bio-equivalence of efficacy when implementing changes in existing formulations. The method combines basic microbiological methods, dynamic flow cell techniques and headspace analysis. The in vitro VSC method was validated by comparing the efficacy of two dentifrices containing TCF with a control fluoride dentifrice, as the TCF products have been clinically proven to control oral malodor. In the validation studies, the TCF-containing dentifrices were significantly better (P < 0.05) than the control dentifrice in inhibiting VSC formation and reducing H2S-producing bacteria. For example, when compared with baseline, the TCF dentifrices reduced VSC formation by 42 to 49%, compared with the control dentifrice, which reduced VSC formation by 3%. There was no significant difference (P > 0.05) between the two TCF dentifrice formulations. Using an in vitro breath VSC model, it has been demonstrated that two variants of a dentifrice containing triclosan, PVM/MA copolymer and fluoride have efficacy that is significantly better than a control fluoridated dentifrice and that there is no significant difference between the triclosan/PVM/MA copolymer/fluoride dentifrice variants.

  11. Virtual Oscillator Controls | Grid Modernization | NREL

    Science.gov Websites

    NREL is developing virtual oscillator controls ... Santa-Barbara, and SunPower. Publications: "Synthesizing Virtual Oscillators To Control Islanded Inverters"; "Synchronization of Parallel Single-Phase Inverters Using Virtual Oscillator Control," IEEE Transactions on Power ...

  12. Heterogeneous voter models

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki; Gibert, N.; Redner, S.

    2010-07-01

    We introduce the heterogeneous voter model (HVM), in which each agent has its own intrinsic rate to change state, reflective of the heterogeneity of real people, and the partisan voter model (PVM), in which each agent has an innate and fixed preference for one of two possible opinion states. For the HVM, the time until consensus is reached is much longer than in the classic voter model. For the PVM in the mean-field limit, a population evolves to a preference-based state, where each agent tends to be aligned with its internal preference. For finite populations, discrete fluctuations ultimately lead to consensus being reached in a time that scales exponentially with population size.
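
    To make the HVM dynamics concrete, one update on a complete graph can be sketched as follows: pick an agent with probability proportional to its intrinsic rate, then let it copy the opinion of a uniformly random other agent. This is our illustrative reading of the model as stated above, not code from the paper:

        #include <stdlib.h>

        /* One HVM update on a complete graph of n >= 2 agents with
         * intrinsic rates r[0..n-1] summing to rsum; state[i] is +/-1. */
        void hvm_step(int n, const double *r, double rsum, int *state)
        {
            double x = rsum * ((double)rand() / RAND_MAX);
            int i = 0;
            while (i < n - 1 && (x -= r[i]) > 0.0)  /* rate-weighted pick */
                i++;
            int j = rand() % (n - 1);               /* uniform j != i */
            if (j >= i)
                j++;
            state[i] = state[j];                    /* adopt that agent's opinion */
        }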

  13. Monitoring of antisolvent crystallization of sodium scutellarein by combined FBRM-PVM-NIR.

    PubMed

    Liu, Xuesong; Sun, Di; Wang, Feng; Wu, Yongjiang; Chen, Yong; Wang, Longhu

    2011-06-01

    Antisolvent crystallization can be used as an alternative to cooling or evaporation for the separation and purification of solid product in the pharmaceutical industry. To improve process understanding of antisolvent crystallization, the use of in-line tools is vital. In this study, process analytical technology (PAT) tools including focused beam reflectance measurement (FBRM), particle video microscope (PVM), and near-infrared spectroscopy (NIRS) were utilized to monitor the antisolvent crystallization of sodium scutellarein. FBRM was used to monitor the chord count and chord length distribution of sodium scutellarein particles in the crystallizer, and PVM, as an in-line video camera, provided pictures imaging particle shape and dimension. In addition, a quantitative PLS model was established by in-line NIRS to detect the concentration of sodium scutellarein in the solvent, and good calibration statistics were obtained (r^2 = 0.976) with a residual predictive deviation value of 11.3. The discussion of the sensitivities, strengths, and weaknesses of the PAT tools may be helpful in the selection of suitable PAT techniques. These in-line techniques eliminate the need for sample preparation and offer a time-saving approach to understanding and monitoring the antisolvent crystallization process. Copyright © 2011 Wiley-Liss, Inc.

  14. GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.

    PubMed

    Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A

    2016-01-01

    In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet-lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time-consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems has been proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay data sets. The quality of results produced by our tool on the GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in the training and prediction phases.

  15. Paging memory from random access memory to backing storage in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  16. By Hand or Not By-Hand: A Case Study of Alternative Approaches to Parallelize CFD Applications

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Bailey, David (Technical Monitor)

    1997-01-01

    While parallel processing promises to speed up applications by several orders of magnitude, the performance achieved still depends upon several factors, including the multiprocessor architecture, system software, data distribution and alignment, as well as the methods used for partitioning the application and mapping its components onto the architecture. The existence of the Gordon Bell Prize given out at Supercomputing every year suggests that while good performance can be attained for real applications on general-purpose multiprocessors, the large investment in manpower and time still has to be repeated for each application-machine combination. As applications and machine architectures become more complex, the cost and time delays of obtaining performance by hand will become prohibitive. Computer users today can turn to three possible avenues for help: parallel libraries, parallel languages and compilers, and interactive parallelization tools. The success of these methodologies, in turn, depends on the proper application of data dependency analysis, program structure recognition and transformation, and performance prediction, as well as the exploitation of user-supplied knowledge. NASA has been developing multidisciplinary applications on highly parallel architectures under the High Performance Computing and Communications Program. Over the past six years, transitions of the underlying hardware and system software have forced the scientists to spend a large effort migrating and recoding their applications. Various attempts to exploit software tools to automate the parallelization process have not produced favorable results. In this paper, we report our most recent experience with CAPTOOL, a package developed at Greenwich University. We have chosen CAPTOOL for three reasons: 1. CAPTOOL accepts a FORTRAN 77 program as input, which suggests its potential applicability to a large collection of legacy codes currently in use. 2. CAPTOOL employs domain decomposition to obtain parallelism. Although the fact that not all kinds of parallelism are handled may seem unappealing, many NASA applications in computational aerosciences as well as earth and space sciences are amenable to domain decomposition. 3. CAPTOOL generates code for a large variety of environments employed across NASA centers, from MPI/PVM on networks of workstations to the IBM SP2 and Cray T3D.

  17. Dockres: a computer program that analyzes the output of virtual screening of small molecules

    PubMed Central

    2010-01-01

    Background: This paper describes a computer program named Dockres that is designed to analyze and summarize the results of virtual screening of small molecules. The program is supplemented with utilities that support the screening process. Foremost among these utilities are scripts that run the virtual screening of a chemical library on a large number of processors in parallel. Methods: Dockres and some of its supporting utilities are written in Fortran-77; other utilities are written as C-shell scripts. They support the parallel execution of the screening. The current implementation of the program handles virtual screening with Autodock-3 and Autodock-4, but can be extended to work with the output of other programs. Results: Analysis of virtual screening by Dockres led to both active and selective lead compounds. Conclusions: Analysis of virtual screening was facilitated and enhanced by Dockres in both the authors' laboratories as well as laboratories elsewhere. PMID:20205801

  18. Establishing a group of endpoints in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong

    2016-02-02

    A parallel computer executes a number of tasks, each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.
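
    Read as a data-structure problem, the "predefined virtual representation" can be pictured as a task-major grid of endpoints, against which a user names a set by ranges rather than by enumerating endpoints. The sketch below is purely illustrative and is not the patent's definition:

        /* Illustrative virtual representation: ntasks tasks, each with a
         * fixed number of endpoints, addressed as (task, slot). */
        struct endpoint_grid {
            int ntasks;
            int endpoints_per_task;
        };

        /* A group specified without naming particular endpoints:
         * "every endpoint of tasks t0 <= t < t1". */
        struct endpoint_group {
            int t0, t1;
        };

        static inline int group_size(const struct endpoint_grid *g,
                                     const struct endpoint_group *grp)
        {
            return (grp->t1 - grp->t0) * g->endpoints_per_task;
        }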

  19. Simulation Exploration through Immersive Parallel Planes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
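
    The pairing of dimensions described above admits a compact sketch: dimensions (2k, 2k+1) of an observation give its (x, y) position on plane k, and the plane index fixes the depth coordinate, yielding the k-th vertex of the observation's polyline. The normalization and plane spacing below are our choices, not the paper's:

        /* Map a d-dimensional observation onto ceil(d/2) parallel planes.
         * verts must have room for (d + 1) / 2 vertices. */
        void observation_to_polyline(int d, const double *obs,
                                     double spacing, double (*verts)[3])
        {
            int nplanes = (d + 1) / 2;
            for (int k = 0; k < nplanes; ++k) {
                verts[k][0] = obs[2 * k];                              /* x */
                verts[k][1] = (2 * k + 1 < d) ? obs[2 * k + 1] : 0.0;  /* y */
                verts[k][2] = k * spacing;                   /* plane depth */
            }
        }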

  20. Simulation Exploration through Immersive Parallel Planes: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  1. Randomized clinical trial of two oral care regimens in reducing and controlling established dental plaque and gingivitis.

    PubMed

    Ayad, Farid; Mateo, Luis R; Dillon, Rensi; Miller, Jeffrey M; Pilch, Shira; Stewart, Bernal

    2015-09-01

    To evaluate the efficacy of a test regimen (TR) integrating the use of a commercially available triclosan, PVM/MA copolymer, and sodium fluoride containing toothpaste, an alcohol-free, fluoride-free cetylpyridinium chloride (CPC) mouthwash, and a manual toothbrush with cheek and tongue cleaner compared to a negative control regimen (NCR) integrating a commercially available 0.76% sodium monofluorophosphate toothpaste, a manual toothbrush and a fluoride-free and alcohol-free non-antibacterial mouthwash in the reduction and control of established plaque and gingivitis after 4 weeks of product use. A 4-week, two-cell, double-blind, parallel-group, randomized clinical study was conducted in Cedar Knolls, New Jersey, USA. Recruited subjects were randomly assigned to two regimens: (1) a commercially available toothpaste containing triclosan, PVM/MA copolymer, and 0.243% sodium fluoride, a manual toothbrush with cheek and tongue cleaner, and commercially available mouthwash containing 0.075% CPC in a fluoride-free and alcohol-free base (TR), or (2) a commercially available 0.76% sodium monofluorophosphate toothpaste, a manual toothbrush with rounded/polished bristles, and a fluoride-free and alcohol-free non-antibacterial mouthwash (NCR). Subjects were examined for dental plaque and gingivitis. Gingival, Gingival Severity, Gingival Interproximal, Plaque, Plaque Severity and Plaque Interproximal Index scores were calculated. For regimen comparison, independent t-test and ANCOVA analyses were performed. 130 subjects were screened; 120 enrolled; and 115 subjects completed the randomized clinical trial (RCT). After 4 weeks of product use, subjects using TR exhibited statistically significant (P < 0.001) reductions of 22.3%, 27.8% and 20.4% in mean Gingival, Gingival Severity and Gingival Interproximal Index scores, respectively, as compared to subjects using NCR. After 4 weeks of product use, subjects using TR exhibited statistically significant (P < 0.001) reductions of 28.2%, 60.7% and 27.6% in mean Plaque, Plaque Severity and Plaque Interproximal Index scores, respectively, as compared to subjects using NCR.

  2. Fundamental Roles of the Golgi-Associated Toxoplasma Aspartyl Protease, ASP5, at the Host-Parasite Interface

    PubMed Central

    Hammoudi, Pierre-Mehdi; Jacot, Damien; Mueller, Christina; Di Cristina, Manlio; Dogga, Sunil Kumar; Marq, Jean-Baptiste; Romano, Julia; Tosetti, Nicolò; Dubrot, Juan; Emre, Yalin; Lunghi, Matteo; Coppens, Isabelle; Yamamoto, Masahiro; Sojka, Daniel; Pino, Paco; Soldati-Favre, Dominique

    2015-01-01

    Toxoplasma gondii possesses sets of dense granule proteins (GRAs) that either assemble at, or cross the parasitophorous vacuole membrane (PVM) and exhibit motifs resembling the HT/PEXEL previously identified in a repertoire of exported Plasmodium proteins. Within Plasmodium spp., cleavage of the HT/PEXEL motif by the endoplasmic reticulum-resident protease Plasmepsin V precedes trafficking to and export across the PVM of proteins involved in pathogenicity and host cell remodelling. Here, we have functionally characterized the T. gondii aspartyl protease 5 (ASP5), a Golgi-resident protease that is phylogenetically related to Plasmepsin V. We show that deletion of ASP5 causes a significant loss in parasite fitness in vitro and an altered virulence in vivo. Furthermore, we reveal that ASP5 is necessary for the cleavage of GRA16, GRA19 and GRA20 at the PEXEL-like motif. In the absence of ASP5, the intravacuolar nanotubular network disappears and several GRAs fail to localize to the PVM, while GRA16 and GRA24, both known to be targeted to the host cell nucleus, are retained within the vacuolar space. Additionally, hypermigration of dendritic cells and bradyzoite cyst wall formation are impaired, critically impacting on parasite dissemination and persistence. Overall, the absence of ASP5 dramatically compromises the parasite’s ability to modulate host signalling pathways and immune responses. PMID:26473595

  3. Assessment of OLED displays for vision research.

    PubMed

    Cooper, Emily A; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E; Norcia, Anthony M

    2013-10-23

    Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function ("gamma correction"). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications.
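
    A minimal sketch of verifying such a power-function ("gamma") fit, with invented sample measurements standing in for real photometer data:

      import numpy as np

      levels = np.array([32, 64, 96, 128, 160, 192, 224, 255])              # digital input values
      lum    = np.array([1.1, 5.0, 12.4, 23.8, 39.5, 60.3, 86.6, 118.0])    # measured cd/m^2 (invented)

      # Model: L = Lmax * (v / 255) ** gamma. In log space this is linear:
      # log L = gamma * log(v / 255) + log Lmax, so a degree-1 fit suffices.
      x, y = np.log(levels / 255.0), np.log(lum)
      gamma, log_lmax = np.polyfit(x, y, 1)
      print(f"gamma = {gamma:.2f}, Lmax = {np.exp(log_lmax):.1f} cd/m^2")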

  4. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  5. PISCES: An environment for parallel scientific computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    The Parallel Implementation of Scientific Computing Environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. Pisces 1 users program in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is on providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task-level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.

  6. A clinical investigation of the efficacy of three commercially available dentifrices for controlling established gingivitis and supragingival plaque.

    PubMed

    Singh, Surrendra; Chaknis, Patricia; DeVizio, William; Petrone, Margaret; Panagakos, Fotinos S; Proskin, Howard M

    2010-01-01

    To assess the efficacy of a dentifrice containing 0.3% triclosan/2.0% PVM/MA copolymer/0.243% sodium fluoride for controlling established gingivitis and supragingival plaque relative to that of a dentifrice containing 0.454% stannous fluoride, sodium hexametaphosphate, and zinc lactate, and a dentifrice containing 0.243% sodium fluoride as a negative control. Following a baseline examination for gingivitis and supragingival plaque, qualifying adult male and female subjects from the Piscataway, NJ, USA area were randomized into three dentifrice groups. Subjects were instructed to brush their teeth twice daily (morning and evening) for one minute with their assigned dentifrice and a soft-bristled toothbrush. Examinations for gingivitis and supragingival plaque were repeated after six weeks of product use. One-hundred and seventy-one (171) subjects complied with the protocol and completed the study. Relative to the group using the dentifrice with 0.243% sodium fluoride alone, the 0.3% triclosan/2.0% PVM/MA copolymer/0.243% sodium fluoride group exhibited statistically significant reductions in gingival index and supragingival plaque index scores of 25.3% and 33.0%, respectively, after six weeks of product use. Similarly, relative to the group using the 0.243% sodium fluoride dentifrice, the 0.454% stannous fluoride, sodium hexametaphosphate, and zinc lactate dentifrice group exhibited statistically significant reductions in gingival index and plaque index scores of 8.1% and 14.1% after six weeks of product use. Further, relative to the 0.454% stannous fluoride, sodium hexametaphosphate, and zinc lactate dentifrice group, the 0.3% triclosan/2.0% PVM/MA copolymer/0.243% sodium fluoride dentifrice group exhibited statistically significant reductions in gingival index and plaque index scores of 18.7% and 22%, respectively. The overall results of this double-blind clinical study support the conclusion that a dentifrice containing 0.3% triclosan/2.0% PVM/MA copolymer/0.243% sodium fluoride is efficacious for the control of established gingivitis and supragingival plaque as compared to a regular fluoride dentifrice, and that it provides a greater level of efficacy for the control of gingivitis and supragingival plaque than does a dentifrice containing 0.454% stannous fluoride, sodium hexametaphosphate, and zinc lactate.

  7. Muscarinic excitation of parvalbumin-positive interneurons contributes to the severity of pilocarpine-induced seizures

    PubMed Central

    Yi, Feng; DeCan, Evan; Stoll, Kurt; Marceau, Eric; Deisseroth, Karl; Lawrence, J. Josh

    2014-01-01

    Objective: A common rodent model in epilepsy research employs the muscarinic acetylcholine receptor (mAChR) agonist pilocarpine, yet the mechanisms underlying the induction of pilocarpine-induced seizures (PISs) remain unclear. Global M1 mAChR (M1R) knockout mice are resistant to PISs, implying that M1R activation disrupts excitation/inhibition balance. Parvalbumin-positive (PV) inhibitory neurons express M1 mAChRs, participate in cholinergically-induced oscillations, and can enter a state of depolarization block (DB) during epileptiform activity. Here, we test the hypothesis that pilocarpine activation of M1Rs expressed on PV cells contributes to PISs. Methods: CA1 PV cells in PV-CRE mice were visualized with a floxed YFP or hM3Dq-mCherry adeno-associated virus, or by crossing PV-CRE mice with the RosaYFP reporter line. To eliminate M1Rs from PV cells, we generated PV-M1KO mice by crossing PV-CRE and floxed M1 mice. Action potential (AP) frequency was monitored during application of pilocarpine (200 µM). In behavioral experiments, locomotion and seizure symptoms were recorded in WT or PV-M1KO mice during PISs. Results: Pilocarpine significantly increased AP frequency in CA1 PV cells into the gamma range. In the continued presence of pilocarpine, a subset (5/7) of PV cells progressed to DB, which was mimicked by hM3Dq activation of Gq-receptor signaling. Pilocarpine-induced depolarization, AP firing at gamma frequency, and progression to DB were prevented in CA1 PV cells of PV-M1KO mice. Finally, compared to WT mice, PV-M1KO mice were associated with reduced severity of PISs. Significance: Pilocarpine can directly depolarize PV+ cells via M1R activation, but a subset of these cells progress to DB. Our electrophysiological and behavioral results suggest that this mechanism is active during PISs, contributing to a collapse of PV-mediated GABAergic inhibition, dysregulation of excitation/inhibition balance, and increased susceptibility to PISs. PMID:25495999

  8. Productive High Performance Parallel Programming with Auto-tuned Domain-Specific Embedded Languages

    DTIC Science & Technology

    2013-01-02

  9. Heterogeneous concurrent computing with exportable services

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy

    1995-01-01

    Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.

  10. Preparation and characterization of mucoadhesive nanoparticles of poly (methyl vinyl ether-co-maleic anhydride) containing glycyrrhizic acid intended for vaginal administration.

    PubMed

    Aguilar-Rosas, Irene; Alcalá-Alcalá, Sergio; Llera-Rojas, Viridiana; Ganem-Rondero, Adriana

    2015-01-01

    Traditional vaginal preparations reside in the vaginal cavity for a relatively short period of time, requiring multiple doses in order to attain the desired therapeutic effect. Therefore, mucoadhesive systems appear to be appropriate to prolong the residence time in the vaginal cavity. In the current study, mucoadhesive nanoparticles based on poly(methyl vinyl ether-co-maleic anhydride) (PVM/MA) intended for vaginal delivery of glycyrrhizic acid (GA) (a drug with well-known antiviral properties) were prepared and characterized. Nanoparticles were generated by a solvent displacement method. Incorporation of GA was performed during nanoprecipitation, followed by adsorption of drug once nanoparticles were formed. The prepared nanoparticles were characterized in terms of size, zeta potential, morphology, drug loading, interaction of GA with PVM/MA (by differential scanning calorimetry) and the in vitro interaction of nanoparticles with pig mucin (at two pH values, 3.6 and 5; with and without GA adsorbed). The preparation method led to nanoparticles of a mean diameter of 198.5 ± 24.3 nm, zeta potential of -44.8 ± 2.8 mV and drug loading of 15.07 ± 0.86 µg/mg polymer. The highest mucin interaction resulted at pH 3.6 for nanoparticles without GA adsorbed. The data obtained suggest the promise of using mucoadhesive nanoparticles of PVM/MA for intravaginal delivery of GA.

  11. Non-equilibrium plasma kinetics of reacting CO: an improved state to state approach

    NASA Astrophysics Data System (ADS)

    Pietanza, L. D.; Colonna, G.; Capitelli, M.

    2017-12-01

    Non-equilibrium plasma kinetics of reacting CO for conditions typically met in microwave discharges have been developed based on the coupling of excited state kinetics and the Boltzmann equation for the electron energy distribution function (EEDF). Particular attention is given to the insertion in the vibrational kinetics of a complete set of electron molecule resonant processes linking the whole vibrational ladder of the CO molecule, as well as to the role of Boudouard reaction, i.e. the process of forming CO2 by two vibrationally excited CO molecules, in shaping the vibrational distribution of CO and promoting reaction channels assisted by vibrational excitation (pure vibrational mechanisms, PVM). PVM mechanisms can become competitive with electron impact dissociation processes (DEM) in the activation of CO. A case study reproducing the conditions of a microwave discharge has been considered following the coupled kinetics also in the post discharge conditions. Results include the evolution of EEDF in discharge and post discharge conditions highlighting the role of superelastic vibrational and electronic collisions in shaping the EEDF. Moreover, PVM rate coefficients and DEM ones are studied as a function of gas temperature, showing a non-Arrhenius behavior, i.e. the rate coefficients increase with decreasing gas temperature as a result of a vibrational-vibrational (V-V) pumping up mechanism able to form plateaux in the vibrational distribution function. The accuracy of the results is discussed in particular in connection to the present knowledge of the activation energy of the Boudouard process.

  12. Effects of Ramadan Fasting on Postural Balance and Attentional Capacities in Elderly People.

    PubMed

    Laatar, R; Borji, R; Baccouch, R; Zahaf, F; Rebai, H; Sahli, S

    2016-01-01

    To evaluate the effects of Ramadan fasting on postural balance and attentional capacities in older adults. The study took place in the Neurophysiology department of a University Hospital. Fifteen males aged between 65 and 80 years were asked to perform a postural balance protocol and a simple reaction time (SRT) test in four testing phases: one week before Ramadan (BR), during the second (SWR) and the fourth week of Ramadan (FWR), and 3 weeks after Ramadan (AR). Postural balance measurements were recorded in the bipedal stance in four different conditions: firm surface/eyes open (EO), firm surface/eyes closed (EC), foam surface/EO and foam surface/EC, using a force platform. Results of the present study demonstrated that center of pressure (CoP) mean velocity (CoPVm), medio-lateral length (CoPLX) and antero-posterior length (CoPLY) were significantly higher during the SWR than at BR. Likewise, values of CoPVm and CoPLX increased significantly during the FWR compared to BR. The CoPLX decreased significantly in the FWR compared to the SWR. Values of CoPVm and CoPLX were significantly higher AR in comparison with BR. In addition, SRT values increased significantly during the SWR and the FWR compared to BR. Ramadan fasting affects postural balance and attentional capacities in the elderly, mainly in the SWR, and it may therefore increase the risk of falls and fall-related injuries. More than three weeks are needed for older adults to recover from the postural balance impairment due to Ramadan fasting.

  13. From planes to brains: parallels between military development of virtual reality environments and virtual neurological surgery.

    PubMed

    Schmitt, Paul J; Agarwal, Nitin; Prestigiacomo, Charles J

    2012-01-01

    Military explorations of the practical role of simulators have served as a driving force for much of the virtual reality technology that we have today. The evolution of 3-dimensional and virtual environments from the early flight simulators used during World War II to the sophisticated training simulators in the modern military followed a path that virtual surgical and neurosurgical devices have already begun to parallel. By understanding the evolution of military simulators as well as comparing and contrasting that evolution with current and future surgical simulators, it may be possible to expedite the development of appropriate devices and establish their validity as effective training tools. As such, this article presents a historical perspective examining the progression of neurosurgical simulators, the establishment of effective and appropriate curricula for using them, and the contributions that the military has made during the ongoing maturation of this exciting treatment and training modality. Copyright © 2012. Published by Elsevier Inc.

  14. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
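
    The low-rank role of the prototypes can be illustrated with a generic Nystrom-style approximation (our sketch, not the PVM code; the data and kernel below are invented):

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 10))                # 500 points, 10 features
      P = X[rng.choice(500, size=20, replace=False)]    # 20 prototype vectors

      def rbf(A, B, gamma=0.1):
          # Gaussian kernel between row vectors of A and B.
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      K_np = rbf(X, P)                                  # n x m cross-kernel block
      K_pp = rbf(P, P)                                  # m x m prototype block
      K_approx = K_np @ np.linalg.pinv(K_pp) @ K_np.T   # rank-m approximation of K
      K = rbf(X, X)
      print(f"relative approximation error: {np.linalg.norm(K - K_approx) / np.linalg.norm(K):.3f}")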

  15. Modeling of spectral signatures of littoral waters

    NASA Astrophysics Data System (ADS)

    Haltrin, Vladimir I.

    1997-12-01

    The spectral values of remotely obtained radiance reflectance coefficient (RRC) are compared with the values of RRC computed from inherent optical properties measured during the shipborne experiment near the West Florida coast. The model calculations are based on the algorithm developed at the Naval Research Laboratory at Stennis Space Center and presented here. The algorithm is based on the radiation transfer theory and uses regression relationships derived from experimental data. Overall comparison of derived and measured RRCs shows that this algorithm is suitable for processing ground truth data for the purposes of remote data calibration. The second part of this work consists of the evaluation of the predictive visibility model (PVM). The simulated three-dimensional values of optical properties are compared with the measured ones. Preliminary results of comparison are encouraging and show that the PVM can qualitatively predict the evolution of inherent optical properties in littoral waters.

  16. Implementation and Assessment of a Virtual Laboratory of Parallel Robots Developed for Engineering Students

    ERIC Educational Resources Information Center

    Gil, Arturo; Peidró, Adrián; Reinoso, Óscar; Marín, José María

    2017-01-01

    This paper presents a tool, LABEL, oriented to the teaching of parallel robotics. The application, organized as a set of tools developed using Easy Java Simulations, enables the study of the kinematics of parallel robotics. A set of classical parallel structures was implemented such that LABEL can solve the inverse and direct kinematic problem of…

  17. Modeling the 1958 Lituya Bay mega-tsunami with a PVM-IFCP GPU-based model

    NASA Astrophysics Data System (ADS)

    González-Vida, José M.; Arcas, Diego; de la Asunción, Marc; Castro, Manuel J.; Macías, Jorge; Ortega, Sergio; Sánchez-Linares, Carlos; Titov, Vasily

    2013-04-01

    In this work we present a numerical study, performed in collaboration with the NOAA Center for Tsunami Research (USA), that uses a GPU version of the PVM-IFCP landslide model for the simulation of the 1958 landslide-generated tsunami of Lituya Bay. In this model, a layer composed of fluidized granular material is assumed to flow within an upper layer of an inviscid fluid (e.g. water). The model is discretized using a two-dimensional PVM-IFCP [Fernández - Castro - Parés. On an Intermediate Field Capturing Riemann Solver Based on a Parabolic Viscosity Matrix for the Two-Layer Shallow Water System, J. Sci. Comput., 48 (2011):117-140] finite volume scheme implemented on GPU cards to increase the speed-up. This model has been previously validated using the two-dimensional physical laboratory experiment data from H. Fritz [Lituya Bay Landslide Impact Generated Mega-Tsunami 50th Anniversary. Pure Appl. Geophys., 166 (2009) pp. 153-175]. In the present work, the first step was to reconstruct the topobathymetry of the Lituya Bay before this event occurred, based on USGS geological survey data. Then, a sensitivity analysis of some model parameters was performed in order to determine the parameters that best fit reality when model results are compared against available event data, such as run-up areas. In this presentation, the reconstruction of the pre-tsunami scenario will be shown, together with a detailed simulation of the tsunami and several comparisons with real data (run-up, wave height, etc.).
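
    For illustration of the finite-volume update structure only (the paper's model is a two-layer PVM-IFCP scheme on GPUs, which this does not reproduce), a single-layer 1-D shallow-water step with a Lax-Friedrichs flux:

      import numpy as np

      g, dx, dt, N = 9.81, 10.0, 0.05, 200
      h = np.ones(N); h[:20] = 5.0      # dam-break-like initial water column
      hu = np.zeros(N)                  # momentum per cell

      def flux(h, hu):
          # Physical flux of the 1-D shallow-water equations.
          u = hu / h
          return np.stack([hu, hu * u + 0.5 * g * h * h])

      for _ in range(200):              # advance 10 s of model time
          U = np.stack([h, hu])
          F = flux(h, hu)
          # Lax-Friedrichs numerical flux at each interface i+1/2
          Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * dx / dt * (U[:, 1:] - U[:, :-1])
          U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])   # conservative update
          h, hu = U[0], U[1]
      print(f"max interior depth after 10 s: {h[1:-1].max():.2f} m")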

  18. Long-term live imaging reveals cytosolic immune responses of host hepatocytes against Plasmodium infection and parasite escape mechanisms

    PubMed Central

    Prado, Monica; Eickel, Nina; De Niz, Mariana; Heitmann, Anna; Agop-Nersesian, Carolina; Wacker, Rahel; Schmuckli-Maurer, Jacqueline; Caldelari, Reto; Janse, Chris J; Khan, Shahid M; May, Jürgen; Meyer, Christian G; Heussler, Volker T

    2015-01-01

    Plasmodium parasites are transmitted by Anopheles mosquitoes to the mammalian host and actively infect hepatocytes after passive transport in the bloodstream to the liver. In their target host hepatocyte, parasites reside within a parasitophorous vacuole (PV). In the present study it was shown that the parasitophorous vacuole membrane (PVM) can be targeted by the autophagy marker proteins LC3, ubiquitin, and SQSTM1/p62 as well as by lysosomes in a process resembling selective autophagy. The dynamics of autophagy marker proteins in individual Plasmodium berghei-infected hepatocytes were followed by live imaging throughout the entire development of the parasite in the liver. Although the host cell very efficiently recognized the invading parasite in its vacuole, the majority of parasites survived this initial attack. Successful parasite development correlated with the gradual loss of all analyzed autophagy marker proteins and associated lysosomes from the PVM. However, other autophagic events, like nonselective canonical autophagy in the host cell, continued, as indicated by the fact that LC3, although no longer labeling the PVM, still localized to autophagosomes in the infected host cell. It appears that growing parasites even benefit from this form of nonselective host cell autophagy as an additional source of nutrients: in host cells deficient for autophagy, parasite growth was retarded and could partly be rescued by the supply of additional amino acids in the medium. Importantly, mouse infections with P. berghei sporozoites confirmed the LC3 dynamics, the positive effect of autophagy activation on parasite growth, and negative effects upon autophagy inhibition. PMID:26208778

  19. Distribution Locational Real-Time Pricing Based Smart Building Control and Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Jun; Dai, Xiaoxiao; Zhang, Yingchen

    This paper proposes a real-virtual parallel computing scheme for smart building operations aimed at augmenting overall social welfare. The University of Denver's campus power grid and Ritchie fitness center are used to demonstrate the proposed approach. An artificial virtual system is built in parallel to the real physical system to evaluate the overall social cost of the building operation, based on a social-science-based working-productivity model, a numerical-experiment-based building energy consumption model, and a power-system-based real-time pricing mechanism. Through interactive feedback exchanged between the real and virtual systems, enlarged social welfare, including monetary cost reduction and energy saving as well as working productivity improvements, can be achieved.

  20. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters.

    PubMed

    Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr

    2010-10-28

    Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina on bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina job distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any available platform-independent computer can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA performed with a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
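
    A sketch of the result-analysis step (our illustration, not MOLA's code; the energy-line pattern is an assumption about the AutoDock4 log format):

      import re
      from pathlib import Path

      # Assumed log line: "Estimated Free Energy of Binding = -7.23 kcal/mol"
      PAT = re.compile(r"Estimated Free Energy of Binding\s*=\s*(-?\d+\.\d+)")

      def best_energy(dlg: Path):
          # Most negative energy across all docked poses in one .dlg log.
          hits = [float(m.group(1)) for m in PAT.finditer(dlg.read_text())]
          return min(hits) if hits else None

      results = {p.stem: best_energy(p) for p in Path("results").glob("*.dlg")}
      ranked = sorted((e, name) for name, e in results.items() if e is not None)
      for energy, name in ranked[:10]:          # ten strongest binders
          print(f"{name}\t{energy:+.2f} kcal/mol")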

  1. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  2. Assessment of OLED displays for vision research

    PubMed Central

    Cooper, Emily A.; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E.; Norcia, Anthony M.

    2013-01-01

    Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function (“gamma correction”). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications. PMID:24155345

  3. Fast parallel 3D profilometer with DMD technology

    NASA Astrophysics Data System (ADS)

    Hou, Wenmei; Zhang, Yunbo

    2011-12-01

    The confocal microscope has been a powerful tool for three-dimensional profile analysis. Single-mode confocal microscopy is limited by scanning speed. This paper presents a 3D profilometer prototype of a parallel confocal microscope based on a DMD (Digital Micromirror Device). In this system the DMD takes the place of the Nipkow disk, a classical parallel scanning scheme, to realize the parallel lateral scanning technique. Operated with a certain pattern, the DMD generates a virtual pinhole array which separates the light into multiple beams. The key parameters that affect the measurement (pinhole size and the lateral scanning distance) can be configured conveniently by different patterns sent to the DMD chip. To avoid disturbance between two virtual pinholes working at the same time, a scanning strategy is adopted. Depth response curves, both axial and abaxial, were extracted. Measurement experiments have been carried out on a silicon structured sample, and an axial resolution of 55 nm is achieved.

  4. Parallelization of Rocket Engine Simulator Software (PRESS)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1998-01-01

    We have outlined our work in the last half of the funding period. We have shown how a demo package for RESSAP using MPI can be done. However, we also mentioned the difficulties with the UNIX platform. We have reiterated some of the suggestions made during the presentation of progress at the Fourth Annual HBCU Conference. Although we have discussed, in some detail, how the TURBDES/PUMPDES software can be run in parallel using MPI, at present we are unable to experiment any further with either MPI or PVM. Due to X windows not being implemented, we are also not able to experiment further with XPVM, which, it will be recalled, has a nice GUI interface. There are also some concerns, on our part, about MPI being an appropriate tool. The best thing about MPI is that it is public domain. Although plenty of documentation exists for the intricacies of using MPI, little information is available on its actual implementations. Other than very typical, somewhat contrived examples, such as the Jacobi algorithm for solving Laplace's equation, there are few examples which can readily be applied to real situations, such as ours. In effect, the review of the literature on both MPI and PVM, and there is a lot of it, indicates something similar to the enormous effort which was spent on LISP and LISP-like languages as tools for artificial intelligence research. During the development of a book on programming languages [12], when we searched the literature for very simple examples like taking averages, reading and writing records, multiplying matrices, etc., we could hardly find any! Yet, so much was said and done on that topic in academic circles. It appears that we faced the same problem with MPI, where despite significant documentation, we could not find even a simple example which supports coarse-grain parallelism involving only a few processes. From the foregoing, it appears that a new direction may be required for more productive research during the extension period (10/19/98 - 10/18/99). At the least, the research would need to be done on Windows 95/Windows NT based platforms. Moreover, with the acquisition of the Lahey Fortran package for the PC platform, and the existing Borland C++ 5.0, we can work on C++ wrapper issues. We have carefully studied the blueprint for the Space Transportation Propulsion Integrated Design Environment for the next 25 years [13] and found the inclusion of HBCUs in that effort encouraging. Especially over the long period for which a map is provided, there is no doubt that HBCUs will grow and become better equipped to do meaningful research. In the shorter period, as was suggested in our presentation at the HBCU conference, some key decisions regarding the aging Fortran-based software for rocket propellants will need to be made. One important issue is whether or not object-oriented languages such as C++ or Java should be used for distributed computing. Whether or not "distributed computing" is even necessary for the existing software is yet another, larger question to be tackled.
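
    For context, the kind of simple coarse-grain example the report found missing can be written in a few lines today; this sketch uses mpi4py, which postdates the report and stands in for the C/Fortran MPI bindings it discusses:

      # usage: mpiexec -n 4 python average.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      local_values = [float(rank * 10 + i) for i in range(5)]  # each process's own data
      local_sum = sum(local_values)

      # Coarse-grain parallelism with only a few processes: one collective call.
      total = comm.reduce(local_sum, op=MPI.SUM, root=0)
      if rank == 0:
          print(f"global average over {size} processes: {total / (5 * size):.2f}")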

  5. Parallel-distributed mobile robot simulator

    NASA Astrophysics Data System (ADS)

    Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo

    1996-06-01

    The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function which we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation is time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.

  6. Establishing a group of endpoints to support collective operations without specifying unique identifiers for any endpoints

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong

    2016-02-02

    A parallel computer executes a number of tasks; each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation being a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defining the set of endpoints without a user specification of any particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.

  7. Plasmacytoid Dendritic Cells Promote Host Defense Against Acute Pneumovirus Infection via the TLR7-MyD88-Dependent Signaling Pathway

    PubMed Central

    Davidson, Sophia; Kaiko, Gerard; Loh, Zhixuan; Lalwani, Amit; Zhang, Vivian; Spann, Kirsten; Foo, Shen Yun; Hansbro, Nicole; Uematsu, Satoshi; Akira, Shizuo; Matthaei, Klaus I.; Rosenberg, Helene F.; Foster, Paul S.; Phipps, Simon

    2012-01-01

    Human respiratory syncytial virus (RSV) is the leading cause of lower respiratory tract infection in infants. In human infants, plasmacytoid dendritic cells (pDC) are recruited to the nasal compartment during infection and initiate host defense through the secretion of type I IFN, IL-12 and IL-6. However, RSV-infected pDCs are refractory to TLR7-mediated activation. Here, we used the rodent-specific pathogen, pneumonia virus of mice (PVM), to determine the contribution of pDC and TLR7-signaling to the development of the innate inflammatory and early adaptive immune response. In wild-type (WT) but not TLR7- or myeloid differentiation protein 88 (MyD88)-deficient mice, PVM inoculation led to a marked infiltration of pDCs and increased expression of type I, II and III IFNs. The delayed induction of IFNs in the absence of TLR7 or MyD88 was associated with a diminished innate inflammatory response and augmented virus recovery from lung tissue. In the absence of TLR7, PVM-specific CD8+ T cell cytokine production was abrogated. The adoptive transfer of TLR7-sufficient but not TLR7-deficient pDC to TLR7-gene-deleted mice recapitulated the antiviral responses observed in WT mice and promoted virus clearance. In summary, TLR7-mediated signaling by pDC is required for appropriate innate responses to acute pneumovirus infection. It is conceivable that as-yet-unidentified defects in the TLR7 signaling pathway may be associated with elevated levels of RSV-associated morbidity and mortality among otherwise healthy human infants. PMID:21482736

  8. Culture media-based selection of endothelial cells, pericytes, and perivascular-resident macrophage-like melanocytes from the young mouse vestibular system.

    PubMed

    Zhang, Jinhui; Chen, Songlin; Cai, Jing; Hou, Zhiqiang; Wang, Xiaohan; Kachelmeier, Allan; Shi, Xiaorui

    2017-03-01

    The vestibular blood-labyrinth barrier (BLB) is comprised of perivascular-resident macrophage-like melanocytes (PVM/Ms) and pericytes (PCs), in addition to endothelial cells (ECs) and basement membrane (BM), and bears strong resemblance to the cochlear BLB in the stria vascularis. Over the past few decades, in vitro cell-based models have been widely used in blood-brain barrier (BBB) and blood-retina barrier (BRB) research, and have proved to be powerful tools for studying cell-cell interactions in their respective organs. Study of both the vestibular and strial BLB has been limited by the unavailability of primary culture cells from these barriers. To better understand how barrier component cells interact in the vestibular system to control BLB function, we developed a novel culture medium-based method for obtaining EC, PC, and PVM/M primary cells from tiny explants of the semicircular canal, sacculus, utriculus, and ampullae tissue of young mouse ears at post-natal age 8-12 d. Each phenotype is grown in a specific culture medium which selectively supports that phenotype in a mixed population of vestibular cell types. The unwanted phenotypes do not survive passaging. The protocol does not require additional equipment or special enzyme treatment. The harvesting process takes less than 2 h. Primary cell types are generated within 7-10 d. The primary culture ECs, PCs, and PVM/Ms have consistent phenotypes, more than 90% pure, after two passages (∼3 weeks). The highly purified primary cell lines can be used for studying cell-cell interactions, barrier permeability, and angiogenesis. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed, and compared. These systems are: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
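
    A minimal round-trip (ping-pong) microbenchmark of the kind used for such comparisons, with plain TCP sockets standing in for the socket baseline (host, port, and message size are arbitrary choices):

      # usage: python pingpong.py server   (in one shell)
      #        python pingpong.py client   (in another)
      import socket, sys, time

      HOST, PORT, SIZE, REPS = "127.0.0.1", 5019, 1024, 1000

      def recv_all(sock, n):
          # TCP may deliver fewer bytes per recv; loop until the full message arrives.
          buf = b""
          while len(buf) < n:
              chunk = sock.recv(n - len(buf))
              if not chunk:
                  raise ConnectionError("peer closed")
              buf += chunk
          return buf

      if sys.argv[1] == "server":
          with socket.create_server((HOST, PORT)) as srv:
              conn, _ = srv.accept()
              with conn:
                  for _ in range(REPS):
                      conn.sendall(recv_all(conn, SIZE))    # echo each message back
      else:
          with socket.create_connection((HOST, PORT)) as c:
              msg = b"x" * SIZE
              t0 = time.perf_counter()
              for _ in range(REPS):
                  c.sendall(msg)
                  recv_all(c, SIZE)
              rtt = (time.perf_counter() - t0) / REPS
              print(f"mean round trip for {SIZE} B messages: {rtt * 1e6:.1f} us")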

  10. RAGE deficiency predisposes mice to virus-induced paucigranulocytic asthma

    PubMed Central

    Arikkatt, Jaisy; Ullah, Md Ashik; Short, Kirsty Renfree; Zhang, Vivian; Gan, Wan Jun; Loh, Zhixuan; Werder, Rhiannon B; Simpson, Jennifer; Sly, Peter D; Mazzone, Stuart B; Spann, Kirsten M; Ferreira, Manuel AR; Upham, John W; Sukkar, Maria B; Phipps, Simon

    2017-01-01

    Asthma is a chronic inflammatory disease. Although many patients with asthma develop type-2 dominated eosinophilic inflammation, a number of individuals develop paucigranulocytic asthma, which occurs in the absence of eosinophilia or neutrophilia. The aetiology of paucigranulocytic asthma is unknown. However, both respiratory syncytial virus (RSV) infection and mutations in the receptor for advanced glycation endproducts (RAGE) are risk factors for asthma development. Here, we show that RAGE deficiency impairs anti-viral immunity during an early-life infection with pneumonia virus of mice (PVM; a murine analogue of RSV). The elevated viral load was associated with the release of high mobility group box-1 (HMGB1) which triggered airway smooth muscle remodelling in early-life. Re-infection with PVM in later-life induced many of the cardinal features of asthma in the absence of eosinophilic or neutrophilic inflammation. Anti-HMGB1 mitigated both early-life viral disease and asthma-like features, highlighting HMGB1 as a possible novel therapeutic target. DOI: http://dx.doi.org/10.7554/eLife.21199.001 PMID:28099113

  11. Performance verification of network function virtualization in software defined optical transport networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Hu, Liyazhou; Wang, Wei; Li, Yajie; Zhang, Jie

    2017-01-01

    With the continuous opening of resource acquisition and application, a large variety of network hardware appliances are deployed as communication infrastructure. Launching a new network application often implies replacing obsolete devices and providing the space and power to accommodate the new equipment, which increases the energy and capital investment. Network function virtualization (NFV) aims to address these problems by consolidating many types of network equipment onto industry-standard elements such as servers, switches and storage. Many types of IT resources have been deployed to run Virtual Network Functions (vNFs), such as virtual switches and routers. How to deploy NFV in optical transport networks is therefore a problem of great importance. This paper focuses on this problem and gives an implementation architecture of NFV-enabled optical transport networks based on Software Defined Optical Networking (SDON), with the procedure of vNF call and return. In particular, an implementation solution for an NFV-enabled optical transport node is designed, and a parallel processing method for NFV-enabled OTN nodes is proposed. To verify the performance of NFV-enabled SDON, the protocol interaction procedures of control function virtualization and node function virtualization are demonstrated on an SDON testbed. Finally, the benefits and challenges of the parallel processing method for NFV-enabled OTN nodes are simulated and analyzed.

  12. Large-scale virtual screening on public cloud resources with Apache Spark.

    PubMed

    Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola

    2017-01-01

    Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against approximately 2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
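
    A minimal PySpark sketch of the map-style screening pattern described above (not the Spark-VS pipeline; see their GitHub for the real one, and note that dock_score here is a stand-in for invoking the docking engine):

      from pyspark import SparkContext

      def dock_score(smiles: str) -> float:
          # Placeholder: a real run would call the docking software on one
          # compound and parse its binding score from the output.
          return float(len(smiles))                 # dummy score for illustration

      sc = SparkContext(appName="toy-virtual-screen")
      compounds = sc.textFile("library.smi")        # one SMILES string per line (assumed layout)
      best = (compounds
              .map(lambda s: (dock_score(s), s))    # embarrassingly parallel map step
              .takeOrdered(10))                     # ten lowest (best) scores
      print(best)
      sc.stop()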

  13. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

    Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. To perform pixel value calculations, facilitate load balancing across the processors and apply the results to the Z buffer efficiently in parallel requires special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Painter, J.; McCormick, P.; Krogh, M.

    This paper presents the ACL (Advanced Computing Lab) Message Passing Library. It is a high-throughput, low-latency communications library, based on Thinking Machines Corp.'s CMMD, upon which message passing applications can be built. The library has been implemented on the Cray T3D, Thinking Machines CM-5, SGI workstations, and on top of PVM.

  15. A Workstation Farm Optimized for Monte Carlo Shell Model Calculations : Alphleet

    NASA Astrophysics Data System (ADS)

    Watanabe, Y.; Shimizu, N.; Haruyama, S.; Honma, M.; Mizusaki, T.; Taketani, A.; Utsuno, Y.; Otsuka, T.

    We have built a workstation farm named "Alphleet", which consists of 140 COMPAQ Alpha 21264 CPUs, for Monte Carlo Shell Model (MCSM) calculations. It has achieved more than 90% scalable performance with 140 CPUs in MCSM calculations using PVM, and 61.2 Gflops on the LINPACK benchmark.

  16. Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots

    PubMed Central

    Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles

    2017-01-01

    The control of flexible link parallel manipulators is still an open area of research, endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformations of the limbs make the estimation of the Tool Centre Point (TCP) position a challenging task. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or the use of high-computational-cost integration methods. This work presents a novel approach based on a virtual sensor which can not only precisely estimate the deformation of the flexible links in control applications (less than 2% error), but also its derivatives (less than 6% error in velocity and 13% error in acceleration) according to simulation results. The validity of the proposed Virtual Sensor is tested in a Delta Robot, where the position of the TCP is estimated based on the Virtual Sensor measurements with less than 0.03% error in comparison with the flexible approach developed in ADAMS Multibody Software. PMID:28832510

  17. Effect of a pre-brush mouthrinse containing triclosan and a copolymer on calculus formation: a three-month clinical study in Thailand.

    PubMed

    Triratana, T; Kraivaphan, P; Tandhachoon, K; Rustogi, K; Volpe, A R; Petrone, M

    1995-01-01

    A three-month, double-blind, parallel clinical study was conducted on a population of Thai adults to evaluate the effect of the twice-daily use of a commercially available pre-brush mouthrinse on supragingival calculus formation. The test mouthrinse contained 0.03% triclosan and 0.13% PVM/MA copolymer, with no fluoride. The subjects were initially examined for calculus using the Volpe-Manhold procedure. All subjects received an oral prophylaxis and were assigned to the use of either (1) the triclosan/copolymer mouthrinse, or (2) a matching flavored/colored water placebo mouthrinse. Subjects were instructed to rinse twice daily with 10 cc of the assigned mouthrinse for 1 minute, followed by brushing with the provided fluoride toothpaste for 45 seconds. After three months of using the assigned mouthrinse, the subjects were reexamined for calculus formation. The results indicated that the subjects using the triclosan/copolymer mouthrinse had 23.17% less supragingival calculus than the placebo mouthrinse subjects. This reduction was statistically significant at the 99% or greater level of confidence (F = 24.35, p < 0.001).

  18. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    PubMed

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, making SPIRiT a better candidate for k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and is then applied to 2D-navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  19. Efficient operating system level virtualization techniques for cloud resources

    NASA Astrophysics Data System (ADS)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology which provides Infrastructure, Platform and Software as services. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so resources must be made available on demand to satisfy user requirements. Virtualization is the technique by which resources, namely storage, processing power, memory and network I/O, are abstracted. Various virtualization techniques are available for executing operating systems: Full System Virtualization and Para Virtualization. In Full Virtualization, the whole hardware architecture is duplicated virtually, and no modifications are required in the Guest OS, as the OS deals with the VM hypervisor directly. In Para Virtualization, the OS must be modified to run in parallel with other OSs, and for the Guest OS to access the hardware, the host OS must provide a Virtual Machine Interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance and security. This paper briefs both virtualization techniques and discusses the issues in OS-level virtualization.

  20. [Virtual microscopy in pathology teaching and postgraduate training (continuing education)].

    PubMed

    Sinn, H P; Andrulis, M; Mogler, C; Schirmacher, P

    2008-11-01

    As with conventional microscopy, virtual microscopy permits histological tissue sections to be viewed on a computer screen with a free choice of viewing areas and a wide range of magnifications. This, combined with the possibility of linking virtual microscopy to e-learning courses, makes virtual microscopy an ideal tool for teaching and postgraduate training in pathology. Uses of virtual microscopy in pathology teaching include blended learning, with digital teaching slides presented on the Internet in parallel with their presentation in the histology lab, extending student access to histology slides beyond the lab. Other uses are student self-learning on the Internet, as well as the presentation of virtual slides in the classroom, with or without replacing real microscopes. Successful integration of virtual microscopy depends on its embedding in the virtual classroom and the creation of interactive e-learning content. Applications derived from this include the use of virtual microscopy in video clips, podcasts, SCORM modules and presentations on interactive whiteboards in the classroom.

  1. Chronic Neuropsychological Sequelae of Cholinesterase Inhibitors in the Absence of Structural Brain Damage: Two Cases of Acute Poisoning

    PubMed Central

    Roldán-Tapia, Lola; Leyva, Antonia; Laynez, Francisco; Santed, Fernando Sánchez

    2005-01-01

    Here we describe two cases of carbamate poisoning. Patients AMF and PVM were accidentally poisoned by cholinesterase inhibitors. The medical diagnosis in both cases was overcholinergic syndrome following demonstrated exposure to cholinesterase inhibitors. The widespread use of cholinesterase inhibitors, especially as pesticides, produces a great number of human poisonings annually. The main known neurotoxic effect of these substances is cholinesterase inhibition, which causes cholinergic overstimulation. Once AMF and PVM had recovered from the acute intoxication, they underwent extensive neuropsychological evaluation 3 and 12 months after the poisoning event. These assessments point to cognitive deficits in the attention, memory, perceptual, and motor domains 3 months after intoxication. One year later these sequelae remained, even though brain magnetic resonance imaging (MRI) and computed tomography (CT) scans were interpreted as being within normal limits. We present these cases as examples of the neuropsychological profile of long-term sequelae of acute poisoning by cholinesterase inhibitor pesticides, and show the usefulness of neuropsychological assessment in detecting central nervous system dysfunction in the absence of biochemical or structural markers. PMID:15929901

  2. High Performance Active Database Management on a Shared-Nothing Parallel Processor

    DTIC Science & Technology

    1998-05-01

    either stored or virtual. A stored node is like a materialized view. It actually contains the specified tuples. A virtual node is like a real view...

  3. Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2013-01-01

    With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over a 20-fold reduction in the run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.

  4. Default Parallels Plesk Panel Page

    Science.gov Websites

    A default Parallels Plesk Panel web page, retrieved in place of site content: it advertises Parallels service-provider products (hosting, SaaS, and cloud-computing automation software for small businesses) and notes that the page appears because no Web site is configured at this address.

  5. Shared virtual memory and generalized speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Zhu, Jianping

    1994-01-01

    Generalized speedup is defined as parallel speed over sequential speed. The generalized speedup and its relation to other existing performance metrics, such as traditional speedup, efficiency, and scalability, are carefully studied. In terms of the introduced asymptotic speed, it is shown that the difference between the generalized speedup and the traditional speedup lies in the definition of the efficiency of uniprocessor processing, which is a very important issue in shared virtual memory machines. A scientific application was implemented on a KSR-1 parallel computer. Experimental and theoretical results show that the generalized speedup is distinct from the traditional speedup and provides a more reasonable measurement. In the study of different speedups, various causes of superlinear speedup are also presented.
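
    A short formal restatement may clarify the definition (notation mine, not the paper's):

      % Speed is work per unit time; generalized speedup is the ratio of
      % parallel to sequential speed (W = work, T = time):
      \[
        S_{\mathrm{gen}}(p) \;=\; \frac{W_p / T_p}{W_1 / T_1}
      \]
      % This reduces to the traditional speedup S(p) = T_1 / T_p exactly
      % when both runs perform the same work (W_p = W_1); the two metrics
      % therefore differ only in how uniprocessor processing is credited.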

  6. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  7. Multilevel Parallelization of AutoDock 4.2.

    PubMed

    Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P

    2011-04-28

    Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
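
    The two-level decomposition described above can be pictured with a minimal, hypothetical hybrid MPI+OpenMP sketch in C (not mpAD4 source; the job count and workload are placeholders):

      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      #define NUM_JOBS 32                    /* placeholder docking count */

      static void run_docking_job(int job)   /* stand-in for one docking */
      {
          double score = 0.0;
          /* node-level (OpenMP) parallelism within a single job */
          #pragma omp parallel for reduction(+:score)
          for (int i = 0; i < 1000000; i++)
              score += (job + 1) * 1e-6;
          printf("job %d score %.3f\n", job, score);
      }

      int main(int argc, char **argv)
      {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* system level (MPI) */
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          for (int job = rank; job < NUM_JOBS; job += size)
              run_docking_job(job);               /* cyclic job split */
          MPI_Finalize();
          return 0;
      }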

  8. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  9. Improved Learning Efficiency and Increased Student Collaboration through Use of Virtual Microscopy in the Teaching of Human Pathology

    ERIC Educational Resources Information Center

    Braun, Mark W.; Kearns, Katherine D.

    2008-01-01

    The implementation of virtual microscopy in the teaching of pathology at the Bloomington, Indiana extension of the Indiana University School of Medicine permitted the assessment of student attitudes, use and academic performance with respect to this new technology. A gradual and integrated approach allowed the parallel assessment with respect to…

  10. 'Dilute-and-shoot' triple parallel mass spectrometry method for analysis of vitamin D and triacylglycerols in dietary supplements

    USDA-ARS?s Scientific Manuscript database

    A method is demonstrated for analysis of vitamin D-fortified dietary supplements that eliminates virtually all chemical pretreatment prior to analysis, and is referred to as a ‘dilute and shoot’ method. Three mass spectrometers, in parallel, plus a UV detector, an evaporative light scattering detec...

  11. Simulation fidelity of a virtual environment display

    NASA Technical Reports Server (NTRS)

    Nemire, Kenneth; Jacoby, Richard H.; Ellis, Stephen R.

    1994-01-01

    We assessed the degree to which a virtual environment system produced a faithful simulation of three-dimensional space by investigating the influence of a pitched optic array on the perception of gravity-referenced eye level (GREL). We compared the results with those obtained in a physical environment. In a within-subjects factorial design, 12 subjects indicated GREL while viewing virtual three-dimensional arrays at different static orientations. A physical array biased GREL more than did a geometrically identical virtual pitched array. However, addition of two sets of orthogonal parallel lines (a grid) to the virtual pitched array resulted in as large a bias as that obtained with the physical pitched array. The increased bias was caused by longitudinal, but not the transverse, components of the grid. We discuss implications of our results for spatial orientation models and for designs of virtual displays.

  12. Hierarchical virtual screening approaches in small molecule drug discovery.

    PubMed

    Kumar, Ashutosh; Zhang, Kam Y J

    2015-01-01

    Virtual screening has played a significant role in the discovery of small molecule inhibitors of therapeutic targets in last two decades. Various ligand and structure-based virtual screening approaches are employed to identify small molecule ligands for proteins of interest. These approaches are often combined in either hierarchical or parallel manner to take advantage of the strength and avoid the limitations associated with individual methods. Hierarchical combination of ligand and structure-based virtual screening approaches has received noteworthy success in numerous drug discovery campaigns. In hierarchical virtual screening, several filters using ligand and structure-based approaches are sequentially applied to reduce a large screening library to a number small enough for experimental testing. In this review, we focus on different hierarchical virtual screening strategies and their application in the discovery of small molecule modulators of important drug targets. Several virtual screening studies are discussed to demonstrate the successful application of hierarchical virtual screening in small molecule drug discovery. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed Central

    Nadkarni, P. M.; Miller, P. L.

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632

  14. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.

  15. The core legion object model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, M.; Grimshaw, A.

    1996-12-31

    The Legion project at the University of Virginia is building an architecture for designing and constructing system services that provide the illusion of a single virtual machine to users, a virtual machine that provides secure shared object and shared name spaces, application-adjustable fault tolerance, improved response time, and greater throughput. Legion targets wide-area assemblies of workstations, supercomputers, and parallel supercomputers. Legion tackles problems not solved by existing workstation-based parallel processing tools; the system will enable fault tolerance, wide-area parallel processing, interoperability, heterogeneity, a single global name space, protection, security, efficient scheduling, and comprehensive resource management. This paper describes the core Legion object model, which specifies the composition and functionality of Legion's core objects: those objects that cooperate to create, locate, manage, and remove objects in the Legion system. The object model facilitates a flexible, extensible implementation, provides a single global name space, grants site autonomy to participating organizations, and scales to millions of sites and trillions of objects.

  16. An Australian and New Zealand Scoping Study on the Use of 3D Immersive Virtual Worlds in Higher Education

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.; Carlson, Lauren; Gregory, Sue; Tynan, Belinda

    2011-01-01

    This article describes the research design of, and reports selected findings from, a scoping study aimed at examining current and planned applications of 3D immersive virtual worlds at higher education institutions across Australia and New Zealand. The scoping study is the first of its kind in the region, intended to parallel and complement a…

  17. Time Warp Operating System (TWOS)

    NASA Technical Reports Server (NTRS)

    Bellenot, Steven F.

    1993-01-01

    Designed to support parallel discrete-event simulation, TWOS is complete implementation of Time Warp mechanism - distributed protocol for virtual time synchronization based on process rollback and message annihilation.

  18. Affordance Access Matters: Preschool Children's Learning Progressions While Interacting with Touch-Screen Mathematics Apps

    ERIC Educational Resources Information Center

    Bullock, Emma P.; Shumway, Jessica F.; Watts, Christina M.; Moyer-Packenham, Patricia S.

    2017-01-01

    The purpose of this study was to contribute to the research on mathematics app use by very young children, and specifically mathematics apps for touch-screen mobile devices that contain virtual manipulatives. The study used a convergent parallel mixed methods design, in which quantitative and qualitative data were collected in parallel, analyzed…

  19. Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran 77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon, and Thinking Machines' CM-5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.

  20. A Spectral Element Ocean Model on the Cray T3D: the interannual variability of the Mediterranean Sea general circulation

    NASA Astrophysics Data System (ADS)

    Molcard, A. J.; Pinardi, N.; Ansaloni, R.

    A new numerical model, SEOM (Spectral Element Ocean Model; Iskandarani et al., 1994), has been implemented in the Mediterranean Sea. Spectral element methods combine the geometric flexibility of finite element techniques with the rapid convergence rate of spectral schemes. The current version solves the shallow water equations with a fifth- (or sixth-) order accurate spectral scheme and about 50,000 nodes. The domain decomposition philosophy makes it possible to exploit the power of parallel machines. The original MIMD master/slave version of SEOM, written in F90 and PVM, has been ported to the Cray T3D. When critical for performance, Cray-specific high-performance one-sided communication routines (SHMEM) have been adopted to fully exploit the Cray T3D interprocessor network. Tests performed with highly unstructured and irregular grids, on up to 128 processors, show almost linear scalability even with unoptimized domain decomposition techniques. Results from various case studies of the Mediterranean Sea are shown, involving realistic coastline geometry and monthly mean 1000 mb winds from the ECMWF atmospheric model operational analysis for the period January 1987 to December 1994. The simulation results show that variability in the wind forcing considerably affects the circulation dynamics of the Mediterranean Sea.

  1. Time Warp Operating System, Version 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.; et al.

    1993-01-01

    Time Warp Operating System, TWOS, is special purpose computer program designed to support parallel simulation of discrete events. Complete implementation of Time Warp software mechanism, which implements distributed protocol for virtual synchronization based on rollback of processes and annihilation of messages. Supports simulations and other computations in which both virtual time and dynamic load balancing used. Program utilizes underlying resources of operating system. Written in C programming language.

  2. µπ: A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    µπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of µπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by µπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, µπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, µsik. In the largest runs, µπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.

  3. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
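
    A minimal C sketch of the block-size rule with MPI follows (an illustration of the idea only, not the patented PLFS implementation; the file name, local sizes, and use of plain MPI-IO are assumptions):

      #include <mpi.h>
      #include <stdlib.h>

      int main(int argc, char **argv)
      {
          int rank, nprocs;
          long mybytes = 1 << 20;            /* placeholder local size */
          long total;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

          /* Total data across processes; block = total / nprocs. */
          MPI_Allreduce(&mybytes, &total, 1, MPI_LONG, MPI_SUM,
                        MPI_COMM_WORLD);
          long block = total / nprocs;
          char *buf = calloc(block, 1);      /* the data exchange that
                                                fills each block is elided */
          MPI_File fh;
          MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                        MPI_MODE_CREATE | MPI_MODE_WRONLY,
                        MPI_INFO_NULL, &fh);
          /* Each rank writes one uniform block at its own offset. */
          MPI_File_write_at(fh, (MPI_Offset)rank * block, buf,
                            (int)block, MPI_BYTE, MPI_STATUS_IGNORE);
          MPI_File_close(&fh);
          free(buf);
          MPI_Finalize();
          return 0;
      }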

  4. Chip architecture - A revolution brewing

    NASA Astrophysics Data System (ADS)

    Guterl, F.

    1983-07-01

    Techniques being explored by microchip designers and manufacturers to speed up memory access and instruction execution while protecting memory are discussed. Attention is given to hardwiring control logic, pipelining for parallel processing, devising orthogonal instruction sets with interchangeable instruction fields, and the development of hardware implementations of virtual memory and multiuser systems to provide memory management and protection. The inclusion of microcode in mainframes eliminated logic circuits that control timing and gating of the CPU. However, improvements in memory architecture have reduced access time below that needed for instruction execution. Hardwiring the functions of virtual memory enhances memory protection. Parallelism involves a redundant architecture, which allows identical operations to be performed simultaneously, and can be directed with microcode to avoid aborting intermediate instructions once one set of instructions has been completed.

  5. Vision-Based Navigation and Parallel Computing

    DTIC Science & Technology

    1990-08-01

    5.8. Behzad Kamgar-Parsi and Behrooz Kamgar-Parsi, "On Problem Solving with Hopfield Neural Networks", CAR-TR-462, CS-TR... Second, the hypercube connections support logarithmic implementations of fundamental parallel algorithms, such as grid permutations and scan...the pose space. It also uses a set of virtual processors to represent an orthogonal projection grid, and projections of the six-dimensional pose space

  6. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Alongside the downloadable BioNode images, we provide online tutorials that empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.

  7. Virtual fringe projection system with nonparallel illumination based on iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-06-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
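
    The geometric core of such a simulator is intersecting a diverging projector ray with a reference plane; the iterative method described above repeats this kind of step against locally approximated object surfaces. A self-contained C sketch of the plane case (names and values illustrative only, not the authors' code):

      #include <stdio.h>

      typedef struct { double x, y, z; } Vec3;

      static double dot(Vec3 a, Vec3 b)
      { return a.x*b.x + a.y*b.y + a.z*b.z; }

      /* Ray: p(t) = o + t*d.  Plane: dot(n, p) = c.  Returns t, or -1
         if the ray is (numerically) parallel to the plane. */
      static double ray_plane_t(Vec3 o, Vec3 d, Vec3 n, double c)
      {
          double denom = dot(n, d);
          if (denom > -1e-12 && denom < 1e-12)
              return -1.0;
          return (c - dot(n, o)) / denom;
      }

      int main(void)
      {
          Vec3 o = {0, 0, 2}, d = {0.1, 0.0, -1.0}; /* diverging ray */
          Vec3 n = {0, 0, 1};                       /* plane z = 0 */
          double t = ray_plane_t(o, d, n, 0.0);
          printf("hit at (%.3f, %.3f, %.3f)\n",
                 o.x + t*d.x, o.y + t*d.y, o.z + t*d.z);
          return 0;
      }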

  8. PRAIS: Distributed, real-time knowledge-based systems made easy

    NASA Technical Reports Server (NTRS)

    Goldstein, David G.

    1990-01-01

    This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.

  9. Distributed Virtual System (DIVIRS) Project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  10. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1994-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  11. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, Clifford B.

    1995-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  12. Distributed Virtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  13. A framework for grand scale parallelization of the combined finite discrete element method in 2d

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Rougier, E.; Knight, E. E.; Munjiza, A.

    2014-09-01

    Within the context of rock mechanics, the Combined Finite-Discrete Element Method (FDEM) has been applied to many complex industrial problems, such as block caving, deep mining techniques (tunneling, pillar strength, etc.), rock blasting, seismic wave propagation, packing problems, dam stability, rock slope stability, and rock mass strength characterization problems. The reality is that most of these were accomplished in a 2D and/or single-processor realm. In this work a hardware-independent FDEM parallelization framework has been developed using the Virtual Parallel Machine for FDEM (V-FDEM). With V-FDEM, a parallel FDEM software can be adapted to different parallel architecture systems ranging from just a few to thousands of cores.

  14. Metacomputing on Commodity Computers

    DTIC Science & Technology

    1999-05-01

    on NOWs, and this has contributed to the popularity of systems such as PVM [59], MPI [67], Linda [33], and TreadMarks [2]. ... Given that...presents the performance of Calypso and Persistent Linda (PLinda) [77] programs and compares how they can tolerate failures. A biological pattern...adds fault tolerance to Linda programs by using light-weight transactions, whereas Calypso uses the combination of eager scheduling and two-phase

  15. 78 FR 934 - 37 Wilton Road, Milford LLC, and 282 Route 101 LLC, PVM Commercial Center, LLC; Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-07

    ... printed on the eLibrary link of the Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp... the Commission's Web site under http://www.ferc.gov/docs-filing/efiling.asp. Commenters can submit brief comments up to 6,000 characters, without prior registration, using the eComment system at http://www.ferc...

  16. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep a large number of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  17. Program For Parallel Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
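
    The rollback rule at the heart of the Time Warp mechanism can be sketched in a few lines of C (a self-contained toy, not TWOS source; antimessage handling is noted but elided):

      #include <stdio.h>

      #define MAX_STATES 64

      typedef struct { double lvt; int value; } State;

      typedef struct {
          State  log[MAX_STATES]; /* saved states, increasing LVT order */
          int    depth;           /* number of saved states */
          double lvt;             /* current local virtual time */
      } LogicalProcess;

      /* Save the current state before advancing virtual time. */
      static void save_state(LogicalProcess *lp, int value)
      {
          lp->log[lp->depth].lvt = lp->lvt;
          lp->log[lp->depth].value = value;
          lp->depth++;
      }

      /* On receiving a message stamped ts: if ts < LVT, roll back to the
         latest saved state not later than ts (antimessages would also be
         sent here to annihilate outputs of the rolled-back interval). */
      static void on_message(LogicalProcess *lp, double ts)
      {
          if (ts >= lp->lvt) { lp->lvt = ts; return; }  /* in order */
          while (lp->depth > 0 && lp->log[lp->depth - 1].lvt > ts)
              lp->depth--;
          lp->lvt = (lp->depth > 0) ? lp->log[lp->depth - 1].lvt : 0.0;
          printf("rolled back to LVT=%.1f\n", lp->lvt);
      }

      int main(void)
      {
          LogicalProcess lp = { .depth = 0, .lvt = 0.0 };
          save_state(&lp, 1); lp.lvt = 10.0;
          save_state(&lp, 2); lp.lvt = 20.0;
          on_message(&lp, 15.0);  /* straggler: rolls back to LVT=10.0 */
          return 0;
      }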

  18. Visualization and simulated surgery of the left ventricle in the virtual pathological heart of the Virtual Physiological Human

    PubMed Central

    McFarlane, N. J. B.; Lin, X.; Zhao, Y.; Clapworthy, G. J.; Dong, F.; Redaelli, A.; Parodi, O.; Testi, D.

    2011-01-01

    Ischaemic heart failure remains a significant health and economic problem worldwide. This paper presents a user-friendly software system that will form a part of the virtual pathological heart of the Virtual Physiological Human (VPH2) project, currently being developed under the European Commission Virtual Physiological Human (VPH) programme. VPH2 is an integrated medicine project, which will create a suite of modelling, simulation and visualization tools for patient-specific prediction and planning in cases of post-ischaemic left ventricular dysfunction. The work presented here describes a three-dimensional interactive visualization for simulating left ventricle restoration surgery, comprising the operations of cutting, stitching and patching, and for simulating the elastic deformation of the ventricle to its post-operative shape. This will supply the quantitative measurements required for the post-operative prediction tools being developed in parallel in the same project. PMID:22670207

  19. Turning Virtual Reality into Reality: A Checklist to Ensure Virtual Reality Studies of Eating Behavior and Physical Activity Parallel the Real World

    PubMed Central

    Tal, Aner; Wansink, Brian

    2011-01-01

    Virtual reality (VR) provides a potentially powerful tool for researchers seeking to investigate eating and physical activity. Some unique conditions are necessary to ensure that the psychological processes that influence real eating behavior also influence behavior in VR environments. Accounting for these conditions is critical if VR-assisted research is to accurately reflect real-world situations. The current work discusses key considerations VR researchers must take into account to ensure similar psychological functioning in virtual and actual reality and does so by focusing on the process of spontaneous mental simulation. Spontaneous mental simulation is prevalent under real-world conditions but may be absent under VR conditions, potentially leading to differences in judgment and behavior between virtual and actual reality. For simulation to occur, the virtual environment must be perceived as being available for action. A useful chart is supplied as a reference to help researchers to investigate eating and physical activity more effectively. PMID:21527088

  20. Turning virtual reality into reality: a checklist to ensure virtual reality studies of eating behavior and physical activity parallel the real world.

    PubMed

    Tal, Aner; Wansink, Brian

    2011-03-01

    Virtual reality (VR) provides a potentially powerful tool for researchers seeking to investigate eating and physical activity. Some unique conditions are necessary to ensure that the psychological processes that influence real eating behavior also influence behavior in VR environments. Accounting for these conditions is critical if VR-assisted research is to accurately reflect real-world situations. The current work discusses key considerations VR researchers must take into account to ensure similar psychological functioning in virtual and actual reality and does so by focusing on the process of spontaneous mental simulation. Spontaneous mental simulation is prevalent under real-world conditions but may be absent under VR conditions, potentially leading to differences in judgment and behavior between virtual and actual reality. For simulation to occur, the virtual environment must be perceived as being available for action. A useful chart is supplied as a reference to help researchers to investigate eating and physical activity more effectively. © 2011 Diabetes Technology Society.

  1. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    PubMed

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.

  2. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    PubMed

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
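
    The master-side accumulation step that such a scheme implies might look like the following hypothetical C fragment using the PVM 3 API (illustrative only, not the authors' code; worker spawning and the gradient computation itself are assumed to happen elsewhere):

      #include <stdlib.h>
      #include <pvm3.h>

      #define GRAD_TAG 11

      /* Sum `nworkers` partial gradients of length `dim` into `grad`;
         each worker computed its part on its own training-set slice. */
      void collect_gradients(double *grad, int dim, int nworkers)
      {
          double *partial = malloc(dim * sizeof *partial);
          for (int i = 0; i < dim; i++)
              grad[i] = 0.0;
          for (int w = 0; w < nworkers; w++) {
              pvm_recv(-1, GRAD_TAG);          /* any worker, this tag */
              pvm_upkdouble(partial, dim, 1);  /* unpack one partial */
              for (int i = 0; i < dim; i++)
                  grad[i] += partial[i];       /* gradients add across
                                                  data partitions */
          }
          free(partial);
      }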

  3. Trapping virtual pores by crystal retro-engineering

    NASA Astrophysics Data System (ADS)

    Little, Marc A.; Briggs, Michael E.; Jones, James T. A.; Schmidtmann, Marc; Hasell, Tom; Chong, Samantha Y.; Jelfs, Kim E.; Chen, Linjiang; Cooper, Andrew I.

    2015-02-01

    Stable guest-free porous molecular crystals are uncommon. By contrast, organic molecular crystals with guest-occupied cavities are frequently observed, but these cavities tend to be unstable and collapse on removal of the guests—this feature has been referred to as ‘virtual porosity’. Here, we show how we have trapped the virtual porosity in an unstable low-density organic molecular crystal by introducing a second molecule that matches the size and shape of the unstable voids. We call this strategy ‘retro-engineering’ because it parallels organic retrosynthetic analysis, and it allows the metastable two-dimensional hexagonal pore structure in an organic solvate to be trapped in a binary cocrystal. Unlike the crystal with virtual porosity, the cocrystal material remains single crystalline and porous after removal of guests by heating.

  4. Design and optimization of production parameters for boric acid crystals with the crystallization process in an MSMPR crystallizer using FBRM® and PVM® technologies

    NASA Astrophysics Data System (ADS)

    Kutluay, Sinan; Şahin, Ömer; Ceyhan, A. Abdullah; İzgi, M. Sait

    2017-06-01

    In crystallization studies, newly developed technologies such as Focused Beam Reflectance Measurement (FBRM) and Particle Vision and Measurement (PVM) have recently come into wide use: FBRM provides on-line monitoring of a representation of the Chord Length Distribution (CLD), while PVM captures in-situ photographs of the crystals. Properly installed, the FBRM ensures on-line determination of the CLD, which is statistically related to the Crystal Size Distribution (CSD). In industrial crystallization, the CSD and mean crystal size, as well as external habit and internal structure, are important characteristics for further use of the crystals. In this paper, the effects of residence time, stirring speed, feeding rate, supersaturation level and the polyelectrolytes anionic polyacrylamide (APAM) and non-ionic polyacrylamide (NPAM) on the CLD, as well as on the shape of boric acid crystals, were investigated using the FBRM G600 and PVM V819 probes in an MSMPR (Mixed Suspension Mixed Product Removal) crystallizer. The CSD and kinetic data were determined experimentally using the continuous MSMPR crystallizer running at steady state. The population density of nuclei, the nucleation rate and the growth rate were determined from the experimental population balance distribution once steady state was reached.
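
    For reference, the kinetic analysis mentioned above rests on the standard steady-state MSMPR population balance (a textbook relation, not a result of this paper):

      % Population density for size-independent growth rate G and
      % residence time tau:
      \[
        n(L) \;=\; n^{0}\exp\!\left(-\frac{L}{G\tau}\right)
      \]
      % A plot of ln n against crystal size L is then linear with slope
      % -1/(G*tau) and intercept ln n^0, giving the growth rate G, the
      % nuclei population density n^0, and the nucleation rate B^0 = n^0 G.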

  5. Cytoplasmic remodeling of erythrocyte raft lipids during infection by the human malaria parasite Plasmodium falciparum

    PubMed Central

    Murphy, Sean C.; Fernandez-Pol, Sebastian; Chung, Paul H.; Prasanna Murthy, S. N.; Milne, Stephen B.; Salomao, Marcela; Brown, H. Alex; Lomasney, Jon W.; Mohandas, Narla

    2007-01-01

    Studies of detergent-resistant membrane (DRM) rafts in mature erythrocytes have facilitated identification of proteins that regulate formation of endovacuolar structures such as the parasitophorous vacuolar membrane (PVM) induced by the malaria parasite Plasmodium falciparum. However, analyses of raft lipids have remained elusive because detergents interfere with lipid detection. Here, we use primaquine to perturb the erythrocyte membrane and induce detergent-free buoyant vesicles, which are enriched in cholesterol and major raft proteins flotillin and stomatin and contain low levels of cytoskeleton, all characteristics of raft microdomains. Lipid mass spectrometry revealed that phosphatidylethanolamine and phosphatidylglycerol are depleted in endovesicles while phosphoinositides are highly enriched, suggesting raft-based endovesiculation can be achieved by simple (non–receptor-mediated) mechanical perturbation of the erythrocyte plasma membrane and results in sorting of inner leaflet phospholipids. Live-cell imaging of lipid-specific protein probes showed that phosphatidylinositol (4,5) bisphosphate (PIP2) is highly concentrated in primaquine-induced vesicles, confirming that it is an erythrocyte raft lipid. However, the malarial PVM lacks PIP2, although another raft lipid, phosphatidylserine, is readily detected. Thus, different remodeling/sorting of cytoplasmic raft phospholipids may occur in distinct endovacuoles. Importantly, erythrocyte raft lipids recruited to the invasion junction by mechanical stimulation may be remodeled by the malaria parasite to establish blood-stage infection. PMID:17526861

  6. Local Epidermal Growth Factor Receptor Signaling Mediates the Systemic Pathogenic Effects of Staphylococcus aureus Toxic Shock Syndrome.

    PubMed

    Breshears, Laura M; Gillman, Aaron N; Stach, Christopher S; Schlievert, Patrick M; Peterson, Marnie L

    2016-01-01

    Secreted factors of Staphylococcus aureus can activate host signaling from the epidermal growth factor receptor (EGFR). The superantigen toxic shock syndrome toxin-1 (TSST-1) contributes to mucosal cytokine production through a disintegrin and metalloproteinase (ADAM)-mediated shedding of EGFR ligands and subsequent EGFR activation. The secreted hemolysin, α-toxin, can also induce EGFR signaling and directly interacts with ADAM10, a sheddase of EGFR ligands. The current work explores the role of EGFR signaling in menstrual toxic shock syndrome (mTSS), a disease mediated by TSST-1. The data presented show that TSST-1 and α-toxin induce ADAM- and EGFR-dependent cytokine production from human vaginal epithelial cells. TSST-1 and α-toxin also induce cytokine production from an ex vivo porcine vaginal mucosa (PVM) model. EGFR signaling is responsible for the majority of IL-8 production from PVM in response to secreted toxins and live S. aureus. Finally, data are presented demonstrating that inhibition of EGFR signaling with the EGFR-specific tyrosine kinase inhibitor AG1478 significantly increases survival in a rabbit model of mTSS. These data indicate that EGFR signaling is critical for progression of an S. aureus exotoxin-mediated disease and may represent an attractive host target for therapeutics.

  7. The nonstructural proteins of Pneumoviruses are remarkably distinct in substrate diversity and specificity.

    PubMed

    Ribaudo, Michael; Barik, Sailen

    2017-11-06

Interferon (IFN) inhibits viruses by inducing several hundred cellular genes, aptly named 'interferon (IFN)-stimulated genes' (ISGs). The only two RNA viruses of the Pneumovirus genus of the Paramyxoviridae family, namely Respiratory Syncytial Virus (RSV) and Pneumonia Virus of Mice (PVM), each encode two nonstructural (NS) proteins that share no sequence similarity yet both suppress IFN. Since suppression of IFN underlies the ability of these viruses to replicate in the host cells, the mechanism of such suppression has become an important area of research. This Short Report is an important extension of our previous efforts in defining this mechanism. We show that, like their PVM counterparts, the RSV NS proteins also target multiple members of the ISG family. While significantly extending the substrate repertoire of the RSV NS proteins, these results, unexpectedly, also reveal that the target preferences of the NS proteins of the two viruses are entirely different. This is surprising since the two Pneumoviruses are phylogenetically close with similar genome organization and gene function, and the NS proteins of both also serve as suppressors of host IFN response. The finding that the NS proteins of the two highly similar viruses suppress entirely different members of the ISG family raises intriguing questions of pneumoviral NS evolution and mechanism of action.

  8. New views of the Toxoplasma gondii parasitophorous vacuole as revealed by Helium Ion Microscopy (HIM).

    PubMed

    de Souza, Wanderley; Attias, Marcia

    2015-07-01

    The Helium Ion Microscope (HIM) is a new technology that uses a highly focused helium ion beam to scan and interact with the sample, which is not coated. The images have resolution and depth of field superior to field emission scanning electron microscopes. In this paper, we used HIM to study LLC-MK2 cells infected with Toxoplasma gondii. These samples were chemically fixed and, after critical point drying, were scraped with adhesive tape to expose the inner structure of the cell and parasitophorous vacuoles. We confirmed some of the previous findings made by field emission-scanning electron microscopy and showed that the surface of the parasite is rich in structures suggestive of secretion, that the nanotubules of the intravacuolar network (IVN) are not always straight, and that bifurcations are less frequent than previously thought. Fusion of the tubules with the parasite membrane or the parasitophorous vacuole membrane (PVM) was also infrequent. Tiny adhesive links were observed for the first time connecting the IVN tubules. The PVM showed openings of various sizes that even allowed the observation of endoplasmic reticulum membranes in the cytoplasm of the host cell. These findings are discussed in relation to current knowledge on the cell biology of T. gondii. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. In Vitro Ability of a Novel Nanohydroxyapatite Oral Rinse to Occlude Dentine Tubules

    PubMed Central

    Hill, Robert G.; Chen, Xiaohui; Gillam, David G.

    2015-01-01

Objectives. The aim of the study was to investigate the ability of a novel nanohydroxyapatite (nHA) desensitizing oral rinse to occlude dentine tubules compared to selected commercially available desensitizing oral rinses. Methods. 25 caries-free extracted molars were sectioned into 1 mm thick dentine discs. The dentine discs (n = 25) were etched with 6% citric acid for 2 minutes and rinsed with distilled water, prior to a 30-second application of test and control oral rinses. Evaluation was by (1) Scanning Electron Microscopy (SEM) of the dentine surface and (2) fluid flow measurements through a dentine disc. Results. Most of the oral rinses failed to adequately cover the dentine surface, apart from the nHA oral rinse. However, the hydroxyapatite, 1.4% potassium oxalate, and arginine/PVM/MA copolymer oral rinses appeared to be relatively more effective than the nHA test and negative control (potassium nitrate) rinses in reducing fluid flow. Conclusions. Although the novel nHA oral rinse demonstrated the ability to occlude the dentine tubules and reduce fluid flow, some of the other oral rinses, in particular the arginine/PVM/MA copolymer rinse, demonstrated a statistically significant reduction in fluid flow through the dentine disc. PMID:26161093

  10. Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.

    2000-01-01

Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Still other toolkits aid the construction of message-passing programs from scratch. None, however, allows piecemeal translation of codes with complex data dependencies (i.e. non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction to the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline, the prototypical non-data-parallel algorithm, can be constructed with ease.
To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
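
    The core mechanism described above, legacy (non-distributed) and distributed arrays coexisting with explicit mappings between them, can be illustrated with a toy sketch. The following is a minimal illustration of the idea only, written with mpi4py; the function names (distribute_rows, collect_rows) are hypothetical and are not Charon's API.

      # Hypothetical sketch of incremental parallelization in the Charon style:
      # a replicated legacy array coexists with a block-distributed array, and
      # explicit mapping functions move data between the two views.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      N = 16                                                # global row count
      legacy = np.arange(N * 4, dtype=float).reshape(N, 4)  # non-distributed array

      def distribute_rows(a):
          """Map a replicated array to a block-distributed one (local rows only)."""
          counts = [N // size + (r < N % size) for r in range(size)]
          start = sum(counts[:rank])
          return a[start:start + counts[rank]].copy()

      def collect_rows(local):
          """Inverse mapping: gather local blocks back into a replicated array."""
          return np.vstack(comm.allgather(local))

      local = distribute_rows(legacy)   # switch to the distributed view
      local *= 2.0                      # parallel work on truly distributed data
      legacy = collect_rows(local)      # switch back for not-yet-converted code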

  11. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    PubMed Central

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and it is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872

  12. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    PubMed

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and it is more effective and more energy efficient than other placement strategies on the cloud platform.
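
    A toy sketch of the two-stage structure described in this abstract: several independent GA "islands" run first (stage 1), and their best individuals seed the population of a final GA (stage 2). The objective function, bit-string encoding, and parameters below are placeholders for illustration, not the authors' VM-placement model.

      # Two-stage GA skeleton: stage-1 islands seed the stage-2 population.
      import random

      def ga(pop, cost, gens=50, mut=0.05):
          for _ in range(gens):
              pop.sort(key=cost)
              elite = pop[: len(pop) // 2]          # keep the better half
              children = []
              while len(elite) + len(children) < len(pop):
                  a, b = random.sample(elite, 2)
                  cut = random.randrange(1, len(a))
                  child = a[:cut] + b[cut:]          # one-point crossover
                  child = [g ^ (random.random() < mut) for g in child]  # mutation
                  children.append(child)
              pop = elite + children
          return sorted(pop, key=cost)

      cost = lambda ind: abs(sum(ind) - 12)          # placeholder objective
      rand_pop = lambda n: [[random.randint(0, 1) for _ in range(32)] for _ in range(n)]

      stage1_best = [ga(rand_pop(40), cost)[0] for _ in range(4)]   # 4 "hosts"
      final = ga(stage1_best + rand_pop(36), cost)[0]               # stage 2
      print(cost(final))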

  13. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field of view. High-quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
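
    For orientation, the simplest (non-geometric) form of coil compression reduces C physical channels to V virtual coils via an SVD of the channel-by-sample data matrix; the per-location compression and alignment steps that distinguish the method above are omitted in this minimal numpy sketch.

      # Minimal SVD coil compression: project C physical channels onto the
      # V dominant singular directions ("virtual coils") of the data matrix.
      import numpy as np

      def coil_compress(kspace, n_virtual):
          """kspace: complex array (C, n_samples); returns (V, n_samples)."""
          U, s, Vh = np.linalg.svd(kspace, full_matrices=False)
          A = U[:, :n_virtual].conj().T      # compression matrix (V x C)
          return A @ kspace

      rng = np.random.default_rng(0)
      data = rng.standard_normal((32, 5000)) + 1j * rng.standard_normal((32, 5000))
      compressed = coil_compress(data, 6)    # 32 channels -> 6 virtual coils
      print(compressed.shape)                # (6, 5000)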

  14. Multi-Block Parallel Navier-Stokes Simulation of Unsteady Wind Tunnel and Ground Interference Effects

    DTIC Science & Technology

    2001-09-01

coefficient and propulsive efficiency showed that these parameters are virtually the same for both TE conditions (cT = 0.40 and η = 0.21). As a conclusion ... difference in the way the two codes work, they yielded virtually the same solution. This shows that, for a reasonably small time step, whether the boundary...

  15. Suppressing correlations in massively parallel simulations of lattice models

    NASA Astrophysics Data System (ADS)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

For lattice Monte Carlo simulations, parallelization is crucial to make studies of large systems and long simulation times feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.
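
    A generic two-sublattice (checkerboard) decomposition illustrates why domain decomposition permits simultaneous lattice updates at all: sites of one parity have no nearest neighbours of the same parity, so they can be updated in parallel without conflicts. The numpy sketch below, a Metropolis sweep of a 2D Ising model, is for orientation only; it is not the octahedron-model scheme studied in the record.

      # Checkerboard decomposition: update one sublattice at a time, in parallel.
      import numpy as np

      L = 64
      rng = np.random.default_rng(1)
      spins = rng.choice([-1, 1], size=(L, L))
      beta = 0.4
      mask = (np.add.outer(np.arange(L), np.arange(L)) % 2).astype(bool)

      def sweep(spins, parity):
          # Sum of the four nearest neighbours with periodic boundaries.
          nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
          dE = 2.0 * spins * nb                       # energy change if flipped
          accept = rng.random((L, L)) < np.exp(-beta * dE)
          sel = (mask == parity) & accept             # only one sublattice
          spins[sel] *= -1

      for _ in range(100):
          sweep(spins, True)    # all "black" sites updated simultaneously
          sweep(spins, False)   # then all "white" sites
      print(spins.mean())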

  16. Dynamic modeling of parallel robots for computed-torque control implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codourey, A.

    1998-12-01

In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.

  17. A unified framework for building high performance DVEs

    NASA Astrophysics Data System (ADS)

    Lei, Kaibin; Ma, Zhixia; Xiong, Hua

    2011-10-01

    A unified framework for integrating PC cluster based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed in DVEs, it is difficult to enable collaboration of different scene graphs. This paper proposes a technique for non-distributed scene graphs with the capability of object and event distribution. With the increase of graphics data, DVEs require more powerful rendering ability. But general scene graphs are inefficient in parallel rendering. The paper also proposes a technique to connect a DVE and a PC cluster based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.

  18. Discovery of novel human acrosin inhibitors by virtual screening

    NASA Astrophysics Data System (ADS)

    Liu, Xuefei; Dong, Guoqiang; Zhang, Jue; Qi, Jingjing; Zheng, Canhui; Zhou, Youjun; Zhu, Ju; Sheng, Chunquan; Lü, Jiaguo

    2011-10-01

    Human acrosin is an attractive target for the discovery of male contraceptive drugs. For the first time, structure-based drug design was applied to discover structurally diverse human acrosin inhibitors. A parallel virtual screening strategy in combination with pharmacophore-based and docking-based techniques was used to screen the SPECS database. From 16 compounds selected by virtual screening, a total of 10 compounds were found to be human acrosin inhibitors. Compound 2 was found to be the most potent hit (IC50 = 14 μM) and its binding mode was investigated by molecular dynamics simulations. The hit interacted with human acrosin mainly through hydrophobic and hydrogen-bonding interactions, which provided a good starting structure for further optimization studies.

  19. iTOUGH2 V6.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan A.

    2010-11-01

iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N, and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware Req.: Multi-platform; Related/auxiliary software: PVM (if running in parallel).
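
    The record states that the inverse problem is solved by minimizing a non-linear objective function of the weighted differences between model output and observations; under the usual Gaussian-error assumption this takes the weighted least-squares form (a generic sketch, not necessarily iTOUGH2's exact default objective):

      S(\mathbf{p}) = \mathbf{r}(\mathbf{p})^{T} \mathbf{C}_{zz}^{-1} \mathbf{r}(\mathbf{p}),
      \qquad r_i(\mathbf{p}) = z_i^{*} - z_i(\mathbf{p}),

    where p is the parameter vector, z_i^* are the observations, z_i(p) is the corresponding model output, and C_zz is the observation covariance matrix supplying the weights.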

  20. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

It is very time consuming to solve fractional differential equations. The computational complexity of solving a two-dimensional time fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm's results compare well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We believe that parallel computing technology will become a basic tool for computationally intensive fractional applications in the near future.
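
    The "virtual boundary" in the data layout is, in essence, a halo (ghost-cell) region exchanged between neighbouring processes before each solver iteration. A minimal mpi4py sketch of that idea, in one dimension and with hypothetical sizes; the paper's actual 2D decomposition and fractional-derivative discretization are not reproduced here.

      # Halo ("virtual boundary") exchange: each rank stores its block plus one
      # ghost cell per side and swaps ghosts with its neighbours.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      n_local = 100
      u = np.zeros(n_local + 2)     # interior cells plus 2 halo cells
      u[1:-1] = rank                # dummy initial data

      up = rank + 1 if rank + 1 < size else MPI.PROC_NULL
      down = rank - 1 if rank > 0 else MPI.PROC_NULL

      # Send first interior cell down / receive upper halo, and vice versa.
      comm.Sendrecv(u[1:2], dest=down, recvbuf=u[-1:], source=up)
      comm.Sendrecv(u[-2:-1], dest=up, recvbuf=u[0:1], source=down)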

  1. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    PubMed Central

    Xia, Yong; Zhang, Henggui

    2015-01-01

Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge on traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to their expense. The GPU, as a parallel computing environment, therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economical and powerful platform for 3D whole-heart simulations. PMID:26581957

  2. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.

    PubMed

    Xia, Yong; Wang, Kuanquan; Zhang, Henggui

    2015-01-01

Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge on traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to their expense. The GPU, as a parallel computing environment, therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economical and powerful platform for 3D whole-heart simulations.
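
    The ODE/PDE decoupling described above corresponds to the standard operator splitting of the monodomain model (a generic sketch consistent with the description, not the authors' exact discretization):

      \frac{\partial V}{\partial t} = -\frac{I_{\mathrm{ion}}(V,\mathbf{s})}{C_m} + \nabla \cdot (D \nabla V)

    Each time step is split into a per-cell ODE step, dV/dt = -I_ion(V,s)/C_m together with the gating/state equations ds/dt = f(V,s), which parallelizes trivially across cells on the GPU, and a diffusion PDE step, ∂V/∂t = ∇·(D∇V), which couples neighbouring cells.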

  3. A Stochastic Spiking Neural Network for Virtual Screening.

    PubMed

    Morro, A; Canals, V; Oliver, A; Alomar, M L; Galan-Prado, F; Ballester, P J; Rossello, J L

    2018-04-01

Virtual screening (VS) has become a key computational tool in early drug design, and screening performance is of high relevance due to the large volume of data that must be processed to identify molecules with the sought activity-related pattern. At the same time, hardware implementations of spiking neural networks (SNNs) are emerging as a computing technique that can parallelize processes that normally carry a high cost in computing time and power. Consequently, SNNs represent an attractive alternative for performing time-consuming processing tasks such as VS. In this brief, we present a smart stochastic spiking neural architecture that implements the ultrafast shape recognition (USR) algorithm, achieving two orders of magnitude of speed improvement with respect to USR software implementations. The neural system is implemented in hardware using field-programmable gate arrays, allowing a highly parallelized USR implementation. The results show that, due to the high parallelization of the system, millions of compounds can be checked in reasonable times. From these results, we can state that the proposed architecture is a feasible methodology to efficiently enhance time-consuming data-mining processes such as 3-D molecular similarity search.
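
    For context, the classical USR descriptor (the computation this record accelerates in hardware) reduces a 3D conformer to 12 numbers: the first three moments of the atomic distance distributions from four reference points. Below is a minimal numpy sketch of that software baseline, not of the stochastic spiking-neural implementation itself.

      # Classical USR: 4 reference points x 3 moments = 12-number shape signature;
      # similarity is an inverse mean absolute difference of signatures.
      import numpy as np

      def moments(d):
          mu = d.mean()
          sigma = d.std()
          skew = np.cbrt(((d - mu) ** 3).mean())   # signed cube root of 3rd moment
          return [mu, sigma, skew]

      def usr_descriptor(coords):
          ctd = coords.mean(axis=0)                        # centroid
          d_ctd = np.linalg.norm(coords - ctd, axis=1)
          cst = coords[d_ctd.argmin()]                     # closest to centroid
          fct = coords[d_ctd.argmax()]                     # farthest from centroid
          d_fct = np.linalg.norm(coords - fct, axis=1)
          ftf = coords[d_fct.argmax()]                     # farthest from fct
          desc = []
          for ref in (ctd, cst, fct, ftf):
              desc += moments(np.linalg.norm(coords - ref, axis=1))
          return np.array(desc)

      def usr_similarity(a, b):
          return 1.0 / (1.0 + np.abs(a - b).mean())

      rng = np.random.default_rng(0)
      m1, m2 = rng.random((20, 3)), rng.random((25, 3))    # dummy conformers
      print(usr_similarity(usr_descriptor(m1), usr_descriptor(m2)))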

  4. Optimized R functions for analysis of ecological community data using the R virtual laboratory (RvLab)

    PubMed Central

    Varsos, Constantinos; Patkos, Theodore; Pavloudi, Christina; Gougousis, Alexandros; Ijaz, Umer Zeeshan; Filiopoulou, Irene; Pattakos, Nikolaos; Vanden Berghe, Edward; Fernández-Guerra, Antonio; Faulwetter, Sarah; Chatzinikolaou, Eva; Pafilis, Evangelos; Bekiari, Chryssoula; Doerr, Martin; Arvanitidis, Christos

    2016-01-01

Background: Parallel data manipulation using R has previously been addressed by members of the R community; however, most of these studies produce ad hoc solutions that are not readily available to the average R user. Our targeted users, ranging from expert ecologists/microbiologists to computational biologists, often experience difficulties in finding optimal ways to exploit the full capacity of their computational resources. In addition, improving the performance of commonly used R scripts becomes increasingly difficult, especially with large datasets. Furthermore, the implementations described here can be of significant interest to expert bioinformaticians or R developers. Therefore, our goals can be summarized as: (i) description of a complete methodology for the analysis of large datasets by combining capabilities of diverse R packages, (ii) presentation of their application through a virtual R laboratory (RvLab) that makes execution of complex functions and visualization of results easy and readily available to the end-user. New information: In this paper, the novelty stems from implementations of parallel methodologies which rely on the processing of data on different levels of abstraction and the availability of these processes through an integrated portal. Parallel implementation R packages, such as the pbdMPI (Programming with Big Data - Interface to MPI) package, are used to implement Single Program Multiple Data (SPMD) parallelization on primitive mathematical operations, allowing for interplay with functions of the vegan package. The dplyr and RPostgreSQL R packages are further integrated, offering connections to dataframe-like objects (databases) as secondary storage solutions whenever memory demands exceed available RAM resources. The RvLab runs on a PC cluster, using R version 3.1.2 (2014-10-31) on an x86_64-pc-linux-gnu (64-bit) platform, and offers an intuitive virtual environment interface enabling users to perform analysis of ecological and microbial communities based on optimized vegan functions. A beta version of the RvLab is available after registration at: https://portal.lifewatchgreece.eu/ PMID:27932907

  5. Optimized R functions for analysis of ecological community data using the R virtual laboratory (RvLab).

    PubMed

    Varsos, Constantinos; Patkos, Theodore; Oulas, Anastasis; Pavloudi, Christina; Gougousis, Alexandros; Ijaz, Umer Zeeshan; Filiopoulou, Irene; Pattakos, Nikolaos; Vanden Berghe, Edward; Fernández-Guerra, Antonio; Faulwetter, Sarah; Chatzinikolaou, Eva; Pafilis, Evangelos; Bekiari, Chryssoula; Doerr, Martin; Arvanitidis, Christos

    2016-01-01

Parallel data manipulation using R has previously been addressed by members of the R community; however, most of these studies produce ad hoc solutions that are not readily available to the average R user. Our targeted users, ranging from expert ecologists/microbiologists to computational biologists, often experience difficulties in finding optimal ways to exploit the full capacity of their computational resources. In addition, improving the performance of commonly used R scripts becomes increasingly difficult, especially with large datasets. Furthermore, the implementations described here can be of significant interest to expert bioinformaticians or R developers. Therefore, our goals can be summarized as: (i) description of a complete methodology for the analysis of large datasets by combining capabilities of diverse R packages, (ii) presentation of their application through a virtual R laboratory (RvLab) that makes execution of complex functions and visualization of results easy and readily available to the end-user. In this paper, the novelty stems from implementations of parallel methodologies which rely on the processing of data on different levels of abstraction and the availability of these processes through an integrated portal. Parallel implementation R packages, such as the pbdMPI (Programming with Big Data - Interface to MPI) package, are used to implement Single Program Multiple Data (SPMD) parallelization on primitive mathematical operations, allowing for interplay with functions of the vegan package. The dplyr and RPostgreSQL R packages are further integrated, offering connections to dataframe-like objects (databases) as secondary storage solutions whenever memory demands exceed available RAM resources. The RvLab runs on a PC cluster, using R version 3.1.2 (2014-10-31) on an x86_64-pc-linux-gnu (64-bit) platform, and offers an intuitive virtual environment interface enabling users to perform analysis of ecological and microbial communities based on optimized vegan functions. A beta version of the RvLab is available after registration at: https://portal.lifewatchgreece.eu/.

  6. Parallel reduced-instruction-set-computer architecture for real-time symbolic pattern matching

    NASA Astrophysics Data System (ADS)

    Parson, Dale E.

    1991-03-01

This report discusses ongoing work on a parallel reduced-instruction-set-computer (RISC) architecture for automatic production matching. The PRIOPS compiler takes advantage of the memoryless character of automatic processing by translating a program's collection of automatic production tests into an equivalent combinational circuit, a digital circuit without memory whose outputs are immediate functions of its inputs. The circuit provides a highly parallel, fine-grain model of automatic matching. The compiler then maps the combinational circuit onto RISC hardware. The heart of the processor is an array of comparators capable of testing production conditions in parallel. Each comparator attaches to private memory that contains virtual circuit nodes: records of the current state of nodes and busses in the combinational circuit. All comparator memories hold identical information, allowing simultaneous update for a single changing circuit node and simultaneous retrieval of different circuit nodes by different comparators. Along with the comparator-based logic unit is a sequencer that determines the current combination of production-derived comparisons to try, based on the combined success and failure of previous combinations of comparisons. The memoryless nature of automatic matching allows the compiler to designate invariant memory addresses for virtual circuit nodes, and to generate the most effective sequences of comparison test combinations. The result is maximal utilization of parallel hardware, indicating speed increases and scalability beyond those found for coarse-grain, multiprocessor approaches to concurrent Rete matching. Future work will consider application of this RISC architecture to the standard (controlled) Rete algorithm, where search through memory dominates portions of matching.

  7. Jacobian-free approximate solvers for hyperbolic systems: Application to relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Castro, Manuel J.; Gallardo, José M.; Marquina, Antonio

    2017-10-01

We present recent advances in PVM (Polynomial Viscosity Matrix) methods based on internal approximations to the absolute value function, and compare them with Chebyshev-based PVM solvers. These solvers only require a bound on the maximum wave speed, so no spectral decomposition is needed. Another important feature of the proposed methods is that they can be written in Jacobian-free form, in which only evaluations of the physical flux are used. This is particularly interesting when considering systems for which the Jacobians involve complex expressions, e.g., the relativistic magnetohydrodynamics (RMHD) equations. On the other hand, the proposed Jacobian-free solvers have also been extended to the case of approximate DOT (Dumbser-Osher-Toro) methods, which can be regarded as simple and efficient approximations to the classical Osher-Solomon method, sharing most of its interesting features and being applicable to general hyperbolic systems. To test the properties of our schemes, a number of numerical experiments involving the RMHD equations are presented, both in one and two dimensions. The obtained results are in good agreement with those found in the literature and show that our schemes are robust and accurate, running stably under a satisfactory time-step restriction. It is worth emphasizing that, although this work focuses on RMHD, the proposed schemes are suitable for application to general hyperbolic systems.
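
    For readers unfamiliar with PVM schemes, the generic first-order numerical flux in this family has the form (as in the PVM literature; the internal-approximation variants above differ essentially in the choice of polynomial):

      F_{i+1/2} = \frac{1}{2}\bigl(F(u_i) + F(u_{i+1})\bigr)
                  - \frac{1}{2} P\bigl(A_{i+1/2}\bigr)\,(u_{i+1} - u_i),

    where P is a polynomial approximating the absolute value function on [-\lambda_{\max}, \lambda_{\max}], with \lambda_{\max} a bound on the maximum wave speed. The Jacobian-free character comes from never forming A_{i+1/2} explicitly: products of the form A_{i+1/2} v are replaced by flux differences, e.g. A_{i+1/2}(u_{i+1} - u_i) \approx F(u_{i+1}) - F(u_i), so only physical flux evaluations are required.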

  8. [Differences in momentum development when standing up from a chair between elderly with and without frequent falls history].

    PubMed

    Guzmán, Rodrigo Antonio; Prado, Hugo Enrique; Porcel Melián, Helvio; Cordier, Benoit

    2009-01-01

The momentum of the upper body (UB) during the sit-to-stand (STS) transfer could be sensitive to deterioration of dynamic postural control, and also to the risk of falls. The aim of this study is to quantify the differences in the momentum development of the UB during STS in a sample of elderly subjects with and without a history of falls. MATERIAL AND METHODS: The sample consisted of twenty-three voluntary elderly subjects (n=23): six elderly adults with antecedents of frequent falls (more than two within a year) and seventeen without a history of frequent falls. Using a motion analysis system, we registered the kinematics of the UB during STS, from which we calculated the momentum of the UB. The analysis variables were: the maximum values of the vertical (PvM) and horizontal (PhM) linear momenta, the maximum (LMax) and minimum (LMin) values of the angular momentum, and the maximum trunk flexion (θM(UB)). No difference was observed in PhM, LMax, or LMin (P>0.05) between the two groups. However, a significant difference was found for the variables PvM (P=0.03) and θM(UB) (P=0.03) between the two groups. We conclude that, for the sample studied, the frequent-fall condition is related to a smaller capacity to develop vertical momentum and an increased flexion of the upper body.

  9. Mirror-image-induced magnetic modes.

    PubMed

    Xifré-Pérez, Elisabet; Shi, Lei; Tuzer, Umut; Fenollosa, Roberto; Ramiro-Manzano, Fernando; Quidant, Romain; Meseguer, Francisco

    2013-01-22

    Reflection in a mirror changes the handedness of the real world, and right-handed objects turn left-handed and vice versa (M. Gardner, The Ambidextrous Universe, Penguin Books, 1964). Also, we learn from electromagnetism textbooks that a flat metallic mirror transforms an electric charge into a virtual opposite charge. Consequently, the mirror image of a magnet is another parallel virtual magnet as the mirror image changes both the charge sign and the curl handedness. Here we report the dramatic modification in the optical response of a silicon nanocavity induced by the interaction with its image through a flat metallic mirror. The system of real and virtual dipoles can be interpreted as an effective magnetic dipole responsible for a strong enhancement of the cavity scattering cross section.

  10. Internet gratifications and internet addiction: on the uses and abuses of new media.

    PubMed

    Song, Indeok; LaRose, Robert; Eastin, Matthew S; Lin, Carolyn A

    2004-08-01

    Internet addiction has been identified as a pathological behavior, but its symptoms may be found in normal populations, placing it within the scope of conventional theories of media attendance. The present study drew upon fresh conceptualizations of gratifications specific to the Internet to uncover seven gratification factors: Virtual Community, Information Seeking, Aesthetic Experience, Monetary Compensation, Diversion, Personal Status, and Relationship Maintenance. With no parallel in prior research, Virtual Community might be termed a "new" gratification. Virtual Community, Monetary Compensation, Diversion, and Personal Status gratifications accounted for 28% of the variance in Internet Addiction Tendency. The relationship between Internet addiction and gratifications was discussed in terms of the formation of media habits and the distinction between content and process gratifications.

  11. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    1999-01-01

One of the identified goals of EM3 is to implement virtual manufacturing by the time the year 2000 has ended. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. Common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include materials thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, a virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position of becoming a clearinghouse for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.

  12. DOVIS: an implementation for high-throughput virtual screening using AutoDock.

    PubMed

    Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques

    2008-02-27

    Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.
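
    Large-scale docking of this kind is embarrassingly parallel: the ligand library is partitioned into chunks and each worker docks its chunk independently, which is the structure DOVIS automates on a cluster with a queuing system. The following is a schematic Python illustration of that structure only; dock_one is a placeholder, not an AutoDock or DOVIS call.

      # Embarrassingly parallel screening skeleton: split the library into
      # chunks, dock each ligand independently, then rank the scores.
      from multiprocessing import Pool

      def dock_one(ligand_id):
          # Placeholder for "run the docking engine on one ligand".
          score = int(ligand_id.split("_")[1]) % 100 / 10.0
          return ligand_id, score

      if __name__ == "__main__":
          library = [f"ligand_{i:06d}" for i in range(10_000)]
          with Pool(processes=8) as pool:
              scores = pool.map(dock_one, library, chunksize=256)
          top = sorted(scores, key=lambda t: t[1])[:10]   # best (lowest) scores
          print(top)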

  13. Get the LED Out.

    ERIC Educational Resources Information Center

    Jewett, John W., Jr.

    1991-01-01

    Describes science demonstrations with light-emitting diodes that include electrical concepts of resistance, direct and alternating current, sine wave versus square wave, series and parallel circuits, and Faraday's Law; optics concepts of real and virtual images, photoresistance, and optical communication; and modern physics concepts of spectral…

  14. Evaluation of the accuracy of the Rotating Parallel Ray Omnidirectional Integration for instantaneous pressure reconstruction from the measured pressure gradient

    NASA Astrophysics Data System (ADS)

    Moreto, Jose; Liu, Xiaofeng

    2017-11-01

    The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. Dirichlet condition at one boundary point and Neumann condition at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with a homogeneously distributed random noise added to the entire field of DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the Matlab function Rand, has a magnitude varying randomly within the range of +/-40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions achieved by using different random number seeds are involved in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure normalized by the DNS pressure variation range is 0.15 +/-0.07 for the Poisson equation approach, 0.028 +/-0.003 for the Circular Virtual Boundary method and 0.027 +/-0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.
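
    Schematically, omnidirectional integration methods reconstruct pressure by averaging line integrals of the measured gradient over many ray paths (a generic form; the rotating-parallel-ray and circular-virtual-boundary variants differ mainly in how the ray families are generated):

      p(\mathbf{x}) \approx \frac{1}{N} \sum_{k=1}^{N}
        \left[ p(\mathbf{x}_{b,k})
        + \int_{\mathbf{x}_{b,k}}^{\mathbf{x}} \nabla p \cdot d\mathbf{l} \right],

    where each ray k enters the domain at a boundary point x_{b,k} with boundary value p(x_{b,k}); averaging over many directions suppresses the random error carried by any single integration path.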

  15. DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.

    PubMed

    Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques

    2008-09-08

    Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.

  16. Simultaneous fluoroscopic and nuclear imaging: impact of collimator choice on nuclear image quality.

    PubMed

    van der Velden, Sandra; Beijst, Casper; Viergever, Max A; de Jong, Hugo W A M

    2017-01-01

    X-ray-guided oncological interventions could benefit from the availability of simultaneously acquired nuclear images during the procedure. To this end, a real-time, hybrid fluoroscopic and nuclear imaging device, consisting of an X-ray c-arm combined with gamma imaging capability, is currently being developed (Beijst C, Elschot M, Viergever MA, de Jong HW. Radiol. 2015;278:232-238). The setup comprises four gamma cameras placed adjacent to the X-ray tube. The four camera views are used to reconstruct an intermediate three-dimensional image, which is subsequently converted to a virtual nuclear projection image that overlaps with the X-ray image. The purpose of the present simulation study is to evaluate the impact of gamma camera collimator choice (parallel hole versus pinhole) on the quality of the virtual nuclear image. Simulation studies were performed with a digital image quality phantom including realistic noise and resolution effects, with a dynamic frame acquisition time of 1 s and a total activity of 150 MBq. Projections were simulated for 3, 5, and 7 mm pinholes and for three parallel hole collimators (low-energy all-purpose (LEAP), low-energy high-resolution (LEHR) and low-energy ultra-high-resolution (LEUHR)). Intermediate reconstruction was performed with maximum likelihood expectation-maximization (MLEM) with point spread function (PSF) modeling. In the virtual projection derived therefrom, contrast, noise level, and detectability were determined and compared with the ideal projection, that is, as if a gamma camera were located at the position of the X-ray detector. Furthermore, image deformations and spatial resolution were quantified. Additionally, simultaneous fluoroscopic and nuclear images of a sphere phantom were acquired with a physical prototype system and compared with the simulations. For small hot spots, contrast is comparable for all simulated collimators. Noise levels are, however, 3 to 8 times higher in pinhole geometries than in parallel hole geometries. This results in higher contrast-to-noise ratios for parallel hole geometries. Smaller spheres can thus be detected with parallel hole collimators than with pinhole collimators (17 mm vs 28 mm). Pinhole geometries show larger image deformations than parallel hole geometries. Spatial resolution varied between 1.25 cm for the 3 mm pinhole and 4 cm for the LEAP collimator. The simulation method was successfully validated by the experiments with the physical prototype. A real-time hybrid fluoroscopic and nuclear imaging device is currently being developed. Image quality of nuclear images obtained with different collimators was compared in terms of contrast, noise, and detectability. Parallel hole collimators showed lower noise and better detectability than pinhole collimators. © 2016 American Association of Physicists in Medicine.

  17. Opposed-flow virtual cyclone for particle concentration

    DOEpatents

    Rader, Daniel J.; Torczynski, John R.

    2000-12-05

An opposed-flow virtual cyclone for aerosol collection which can accurately collect, classify, and concentrate (enrich) particles in a specific size range. The opposed-flow virtual cyclone is a variation on the virtual cyclone and has its inherent advantages (no-impact particle separation in a simple geometry), while providing a more robust design for concentrating particles in a flow-through type system. The opposed-flow virtual cyclone consists of two geometrically similar virtual cyclones arranged such that their inlet jets are inwardly directed and symmetrically opposed relative to a plane of symmetry located between the two inlet slits. A top plate bounds both jets on the "top" side of the inlets, while the other, lower wall curves "down" and away from each inlet jet. Each inlet jet follows the adjacent lower wall as it turns away, and particles are transferred away from the wall and towards the symmetry plane by centrifugal action. After turning, the two jets merge smoothly along the symmetry line and flow parallel to it through the throat. Particles are transferred from the main flows, across a dividing streamline, and into a central recirculating region, where particle concentrations become greatly increased relative to the main stream.

  18. Shadow Mode Assessment Using Realistic Technologies for the National Airspace (SMART NAS)

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal H.

    2014-01-01

Develop a simulation and modeling capability that: (a) assesses multiple parallel universes, (b) accepts data feeds, (c) allows for a live, virtual, constructive distributed environment, and (d) enables integrated examination of concepts, algorithms, technologies, and National Airspace System (NAS) architectures.

19. Credit WCT. Original 2-1/4" x 2-1/4" color negative is housed ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Credit WCT. Original 2-1" x 2-A" color negative is housed in the JPL Photography Laboratory, Pasadena, California. The mixing pot of the 150-gallon (Size 16-PVM) Baker-Perkins vertical mixer appears in its lowered position, exposing the mixer paddles. JPL employees Harold "Andy" Anderson and Ron Wright in protective clothing demonstrate how to scrape mixed propellant from mixer blades (JPL negative JPL10284BC, 27 January 1989) - Jet Propulsion Laboratory Edwards Facility, Mixer, Edwards Air Force Base, Boron, Kern County, CA

  20. Prototype architecture for a VLSI level zero processing system. [Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.

    1989-01-01

The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Due to the new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, easy maintainability, and increased reliability. Though extensive control functions are implemented in hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with a combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream will be steered into different processing channels in accordance with their virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rates exceeding 150 Mbps will be available from commercial vendors due to advances in disk drive technology.

  1. [Hodge and his planes].

    PubMed

    van Gijn, Jan; Gijselhart, Joost P

    2010-01-01

    Hugh Lenox Hodge (1796-1873) was professor of obstetrics at the University of Pennsylvania for more than 25 years. He divided the birth canal into four virtual and parallel planes through pelvic protuberances, a method still widely used. He also developed a pessary that is now mainly used in stress incontinence.

  2. A class Hierarchical, object-oriented approach to virtual memory management

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.

    1989-01-01

    The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.

  3. Virtual geotechnical laboratory experiments using a simulator

    NASA Astrophysics Data System (ADS)

    Penumadu, Dayakar; Zhao, Rongda; Frost, David

    2000-04-01

The details of a test simulator that provides a realistic environment for performing virtual laboratory experiments in soil mechanics are presented. A computer program, Geo-Sim, that can be used to perform virtual experiments and allows real-time observation of material response is described. The results of experiments, for a given set of input parameters, are obtained with the test simulator using well-trained artificial-neural-network-based soil models for different soil types and stress paths. Multimedia capabilities are integrated in Geo-Sim, using software that links and controls a laser disc player with real-time parallel processing ability. During the simulation of a virtual experiment, relevant portions of the video image of a previously recorded test on an actual soil specimen are displayed along with the graphical presentation of the response from the feedforward ANN model predictions. The pilot simulator developed to date includes all aspects related to performing a triaxial test on cohesionless soil under undrained and drained conditions. The benefits of the test simulator are also presented.

  4. Ligand and structure based virtual screening strategies for hit-finding and optimization of hepatitis C virus (HCV) inhibitors.

    PubMed

    Melagraki, G; Afantitis, A

    2011-01-01

Virtual screening (VS) has received increased attention in recent years due to the large datasets made available, the development of advanced VS techniques, and the encouraging fact that VS has contributed to the discovery of several compounds that have either reached the market or entered clinical trials. Hepatitis C virus (HCV) nonstructural protein 5B (NS5B) has become an attractive target for the development of antiviral drugs, and many small molecules have been explored as possible HCV NS5B inhibitors. In parallel with experimental practices, VS can serve as a valuable tool in the identification of novel effective inhibitors. Different techniques and workflows have been reported in the literature with the goal of prioritizing possible potent hits. In this context, different virtual screening strategies have been deployed for the identification of novel hepatitis C virus (HCV) inhibitors. This work reviews recent applications of virtual screening in an effort to identify novel potent HCV inhibitors.

  5. Quality-by-Design (QbD): An integrated process analytical technology (PAT) approach for a dynamic pharmaceutical co-precipitation process characterization and process design space development.

    PubMed

    Wu, Huiquan; White, Maury; Khan, Mansoor A

    2011-02-28

    The aim of this work was to develop an integrated process analytical technology (PAT) approach for a dynamic pharmaceutical co-precipitation process characterization and design space development. A dynamic co-precipitation process by gradually introducing water to the ternary system of naproxen-Eudragit L100-alcohol was monitored at real-time in situ via Lasentec FBRM and PVM. 3D map of count-time-chord length revealed three distinguishable process stages: incubation, transition, and steady-state. The effects of high risk process variables (slurry temperature, stirring rate, and water addition rate) on both derived co-precipitation process rates and final chord-length-distribution were evaluated systematically using a 3(3) full factorial design. Critical process variables were identified via ANOVA for both transition and steady state. General linear models (GLM) were then used for parameter estimation for each critical variable. Clear trends about effects of each critical variable during transition and steady state were found by GLM and were interpreted using fundamental process principles and Nyvlt's transfer model. Neural network models were able to link process variables with response variables at transition and steady state with R(2) of 0.88-0.98. PVM images evidenced nucleation and crystal growth. Contour plots illustrated design space via critical process variables' ranges. It demonstrated the utility of integrated PAT approach for QbD development. Published by Elsevier B.V.

  6. Selfconsistent vibrational and free electron kinetics for CO2 dissociation in cold plasmas

    NASA Astrophysics Data System (ADS)

    Capitelli, Mario

    2016-09-01

    The activation of CO2 by cold plasmas is receiving new theoretical interest thanks to two European groups. The Bogaerts group developed a global model for the activation of CO2 trying to reproduce the experimental values for DBD and microwave discharges. The approach of Pietanza et al was devoted to understand the dependence of electron energy distribution function (eedf) of pure CO2 on the presence of concentrations of electronically and vibrationally excited states taken as parameter. To understand the importance of the vibrational excitation in the dissociation process Pietanza et al compared an upper limit to the dissociation process from a pure vibrational mechanism (PVM) with the corresponding electron impact dissociation rate, the prevalence of the two models depending on the reduced electric field and on the choice of the electron molecule cross section database. Improvement of the Pietanza et al model is being considered by coupling the time dependent Boltzmann solver with the non equilibrium vibrational kinetics of asymmetric mode and with simplified plasma chemistry kinetics describing the ionization/recombination process and the excitation-deexcitation of a metastable level at 10.5eV. A new PVM mechanism is also considered. Preliminary results, for both discharge and post discharge conditions, emphasize the action of superelastic collisions involving both vibrationally and electronically excited states in affecting the eedf. The new results can be used to plan a road map for future developments of numerical codes for rationalizing existing experimental values, as well as, for indicating new experimental situations.

  7. Service learning: Priority 4 Paws mobile surgical service for shelter animals.

    PubMed

    Freeman, Lynetta J; Ferguson, Nancy; Litster, Annette; Arighi, Mimi

    2013-01-01

    The increasing attention given to competencies needed to enter the workforce has revealed a need for veterinary students to gain more experience in performing small-animal elective surgery before graduation. In addition, guidelines for standards of care for shelter animals recommend that all dogs and cats should be spayed or neutered before adoption. Teaching surgical skills while serving the needs of local animal shelters represents an ideal service-learning opportunity. Following a pilot study and the benchmarking of other programs, an elective course in shelter medicine and surgery was created at Purdue University College of Veterinary Medicine (PVM) to allow senior DVM students an opportunity to spend 2 weeks on a mobile surgery unit (Priority 4 Paws) and 1 week at an animal shelter. With financial assistance from sponsors and donors, PVM purchased and equipped a mobile surgery unit, hired a full-time veterinarian and a registered veterinary technician, and established relationships with 12 animal shelters. From July 30, 2012, to March 22, 2013, 1,941 spays and neuters were performed with excellent postsurgical outcomes while training 33 veterinary students on rotation and 26 veterinary technician students. The program was well accepted by both students and the shelters being served. The Priority 4 Paws program is an example of an integrated, community-based service-learning opportunity that not only helps to improve the surgical skills of veterinary students but also helps to meet an identified community need.

  8. Plasmid mediated antimicrobial resistance in Ontario isolates of Actinobacillus (Haemophilus) pleuropneumoniae.

    PubMed Central

    Gilbride, K A; Rosendal, S; Brunton, J L

    1989-01-01

    The genetic basis of antimicrobial resistance in Ontario isolates of Actinobacillus (Haemophilus) pleuropneumoniae was studied. Two Ontario isolates of A. pleuropneumoniae were found to be resistant to sulfonamides (Su), streptomycin (Sm) and ampicillin (Amp). Resistance to Su and Sm was specified by a 2.3 megadalton (Mdal) plasmid which appeared to be identical to pVM104, which has been described in isolates of A. pleuropneumoniae from South Dakota. Southern hybridization showed that the 2.3 Mdal Su Sm plasmid was highly related to those Hinc II fragments of RSF1010 known to carry the Su Sm genes, but was unrelated to the remainder of this Salmonella resistance plasmid. Resistance to Su and Amp was specified by a 3.5 Mdal plasmid and appeared identical to pVM105 previously reported. The beta-lactamase enzyme had an isoelectric point of approximately 9.0. Southern hybridization showed no relationship to the TEM beta-lactamase. A third isolate of A. pleuropneumoniae was found to be resistant to chloramphenicol (Cm), Su and Sm by virtue of a 3.0 Mdal plasmid which specified a chloramphenicol acetyl transferase. We conclude that resistance to Su, Sm, Amp and Cm is mediated by small plasmids in A. pleuropneumoniae. Although the Su and Sm resistance determinants are highly related to those found in Enterobacteriaceae, the plasmids themselves and the beta-lactamase determinant are different. Images Fig. 1. Fig. 2. Fig. 3. Fig. 4. PMID:2914226

  9. A Configuration Framework and Implementation for the Least Privilege Separation Kernel

    DTIC Science & Technology

    2010-12-01

    The Altova Web site states that virtualization software, Parallels for Mac and Wine , is required for running it on MacOS and RedHat Linux...University of Singapore Singapore 28. Tan Lai Poh National University of Singapore Singapore 29. Quek Chee Luan Defence Science & Technology Agency Singapore

  10. Parallel Compilation on Virtual Machines in a Development Cloud Environment

    DTIC Science & Technology

    2013-09-01

    the potential impact of a possible course of action. 1-2 2. Approach We performed a simple experiment to determine whether the multiple CPUs...PERFORMING ORGANIZATION NAME(S) AND ADDRESSES 8. PERFORMING ORGANIZATION REPORT NUMBER D-4996 H13 -001206 Institute for Defense Analyses 4850 Mark

  11. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  12. Extensions to the Parallel Real-Time Artificial Intelligence System (PRAIS) for fault-tolerant heterogeneous cycle-stealing reasoning

    NASA Technical Reports Server (NTRS)

    Goldstein, David

    1991-01-01

    Extensions to an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS) are discussed. PRAIS strives for transparently parallelizing production (rule-based) systems, even under real-time constraints. PRAIS accomplished these goals (presented at the first annual C Language Integrated Production System (CLIPS) conference) by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors. Results using the original PRAIS architecture over a network of Sun 3's, Sun 4's and VAX's are presented. Mechanisms using the producer-consumer model to extend the architecture for fault-tolerance and distributed truth maintenance initiation are also discussed.

  13. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large scale, autoscaling, and heterogeneous virtual resources in the existing cloud computing, a new method of live virtual machine (VM) migration detection algorithm based on the cellular neural networks (CNNs), is presented. Through analyzing the detection process, the parameter relationship of CNN is mapped as an optimization problem, in which improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration processing intuitively. Compared with the best fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is affordable to parallelism and analog very large scale integration (VLSI) implementation allowing the VM migration detection to be performed better. PMID:24959631

  14. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large scale, autoscaling, and heterogeneous virtual resources in the existing cloud computing, a new method of live virtual machine (VM) migration detection algorithm based on the cellular neural networks (CNNs), is presented. Through analyzing the detection process, the parameter relationship of CNN is mapped as an optimization problem, in which improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration processing intuitively. Compared with the best fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is affordable to parallelism and analog very large scale integration (VLSI) implementation allowing the VM migration detection to be performed better.

  15. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application-developed in Windows Azure®, the platform of the Microsoft® cloud-for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code in which the simulations are based-the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes, and for different number of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37 ×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.

  16. Scalable Visual Analytics of Massive Textual Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.

    2007-04-01

    This paper describes the first scalable implementation of text processing engine used in Visual Analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive dataset. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as Pubmed. This approach enables interactive analysis of large datasets beyond capabilities of existing state-of-the art visual analytics tools.

  17. Virtual Mirrors

    ERIC Educational Resources Information Center

    Greenslade, Thomas B., Jr.

    2010-01-01

    The multiple-reflection photograph in Fig. 1 was taken in an elevator on board the cruise ship Norwegian Jade in March 2008. Three of the four walls of the elevator were mirrored, allowing me to see the combination of two standard arrangements of plane mirrors: two mirrors set at 90 degrees to each other and two parallel mirrors. Optical phenomena…

  18. Louisiana forests

    Treesearch

    Herbert S. Sternitzke

    1965-01-01

    The total amount of forest land in Louisiana is virtually the same today as it was a decade ago. But its distribution has changed noticeably. In the Delta, for example, forest acreage is still declining; between 1954 and 1964, it dropped some 7 percent, thus closely paralleling trends in the Delta sections of neighboring Arkansas and Mississippi. Outside the Delta,...

  19. The Impact of New Information Technology on Bureaucratic Organizational Culture

    ERIC Educational Resources Information Center

    Givens, Mark A.

    2011-01-01

    Virtual work environments (VWEs) have been used in the private sector for more than a decade, but the United States Marine Corps (USMC), as a whole, has not yet taken advantage of associated benefits. The USMC construct parallels the bureaucratic organizational culture and uses an antiquated information technology (IT) infrastructure. During an…

  20. The Pet Connection. Use of pets as sentinels to better integrate data on endocrine health effects of persistent environmental contaminants.

    EPA Science Inventory

    Many pets, cats in particular, spend virtually all their lives within the family domicile, thus paralleling their owner’s low-level but chronic exposure to a variety of indoor contaminants. Owing to their shorter life-spans and shorter latency periods, associations between contam...

  1. The Parallel Information Universe

    ERIC Educational Resources Information Center

    Eisenberg, Mike

    2008-01-01

    The Web 2.0 "buzz" starts with new technologies such as virtual worlds, cell phones and handheld devices that offer 24/7 web access, tagging, social networks, and blogs and brings together various web capabilities in unique combinations. Web 2.0, however, is about much more than the technology--it is about a change in focus to participation, user…

  2. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  3. Evaluation of the power consumption of a high-speed parallel robot

    NASA Astrophysics Data System (ADS)

    Han, Gang; Xie, Fugui; Liu, Xin-Jun

    2018-06-01

    An inverse dynamic model of a high-speed parallel robot is established based on the virtual work principle. With this dynamic model, a new evaluation method is proposed to measure the power consumption of the robot during pick-and-place tasks. The power vector is extended in this method and used to represent the collinear velocity and acceleration of the moving platform. Afterward, several dynamic performance indices, which are homogenous and possess obvious physical meanings, are proposed. These indices can evaluate the power input and output transmissibility of the robot in a workspace. The distributions of the power input and output transmissibility of the high-speed parallel robot are derived with these indices and clearly illustrated in atlases. Furtherly, a low-power-consumption workspace is selected for the robot.

  4. Sparse-view photoacoustic tomography using virtual parallel-projections and spatially adaptive filtering

    NASA Astrophysics Data System (ADS)

    Wang, Yihan; Lu, Tong; Wan, Wenbo; Liu, Lingling; Zhang, Songhe; Li, Jiao; Zhao, Huijuan; Gao, Feng

    2018-02-01

    To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements have been adopted effectively to accelerate the data acquisition. However, since the reconstruction from the sparse-view sampling data is challenging, both of the effective measurement and the appropriate reconstruction should be taken into account. In this study, we present an iterative sparse-view PAT reconstruction scheme where a virtual parallel-projection concept matching for the proposed measurement condition is introduced to help to achieve the "compressive sensing" procedure of the reconstruction, and meanwhile the spatially adaptive filtering fully considering the a priori information of the mutually similar blocks existing in natural images is introduced to effectively recover the partial unknown coefficients in the transformed domain. Therefore, the sparse-view PAT images can be reconstructed with higher quality compared with the results obtained by the universal back-projection (UBP) algorithm in the same sparse-view cases. The proposed approach has been validated by simulation experiments, which exhibits desirable performances in image fidelity even from a small number of measuring positions.

  5. VO-KOREL: A Fourier Disentangling Service of the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Škoda, Petr; Hadrava, Petr; Fuchs, Jan

    2012-04-01

    VO-KOREL is a web service exploiting the technology of the Virtual Observatory for providing astronomers with the intuitive graphical front-end and distributed computing back-end running the most recent version of the Fourier disentangling code KOREL. The system integrates the ideas of the e-shop basket, conserving the privacy of every user by transfer encryption and access authentication, with features of laboratory notebook, allowing the easy housekeeping of both input parameters and final results, as well as it explores a newly emerging technology of cloud computing. While the web-based front-end allows the user to submit data and parameter files, edit parameters, manage a job list, resubmit or cancel running jobs and mainly watching the text and graphical results of a disentangling process, the main part of the back-end is a simple job queue submission system executing in parallel multiple instances of the FORTRAN code KOREL. This may be easily extended for GRID-based deployment on massively parallel computing clusters. The short introduction into underlying technologies is given, briefly mentioning advantages as well as bottlenecks of the design used.

  6. Teaching Basic Field Skills Using Screen-Based Virtual Reality Landscapes

    NASA Astrophysics Data System (ADS)

    Houghton, J.; Robinson, A.; Gordon, C.; Lloyd, G. E. E.; Morgan, D. J.

    2016-12-01

    We are using screen-based virtual reality landscapes, created using the Unity 3D game engine, to augment the training geoscience students receive in preparing for fieldwork. Students explore these landscapes as they would real ones, interacting with virtual outcrops to collect data, determine location, and map the geology. Skills for conducting field geological surveys - collecting, plotting and interpreting data; time management and decision making - are introduced interactively and intuitively. As with real landscapes, the virtual landscapes are open-ended terrains with embedded data. This means the game does not structure student interaction with the information as it is through experience the student learns the best methods to work successfully and efficiently. These virtual landscapes are not replacements for geological fieldwork rather virtual spaces between classroom and field in which to train and reinforcement essential skills. Importantly, these virtual landscapes offer accessible parallel provision for students unable to visit, or fully partake in visiting, the field. The project has received positive feedback from both staff and students. Results show students find it easier to focus on learning these basic field skills in a classroom, rather than field setting, and make the same mistakes as when learning in the field, validating the realistic nature of the virtual experience and providing opportunity to learn from these mistakes. The approach also saves time, and therefore resources, in the field as basic skills are already embedded. 70% of students report increased confidence with how to map boundaries and 80% have found the virtual training a useful experience. We are also developing landscapes based on real places with 3D photogrammetric outcrops, and a virtual urban landscape in which Engineering Geology students can conduct a site investigation. This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all our virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.

  7. Means and method of balancing multi-cylinder reciprocating machines

    DOEpatents

    Corey, John A.; Walsh, Michael M.

    1985-01-01

    A virtual balancing axis arrangement is described for multi-cylinder reciprocating piston machines for effectively balancing out imbalanced forces and minimizing residual imbalance moments acting on the crankshaft of such machines without requiring the use of additional parallel-arrayed balancing shafts or complex and expensive gear arrangements. The novel virtual balancing axis arrangement is capable of being designed into multi-cylinder reciprocating piston and crankshaft machines for substantially reducing vibrations induced during operation of such machines with only minimal number of additional component parts. Some of the required component parts may be available from parts already required for operation of auxiliary equipment, such as oil and water pumps used in certain types of reciprocating piston and crankshaft machine so that by appropriate location and dimensioning in accordance with the teachings of the invention, the virtual balancing axis arrangement can be built into the machine at little or no additional cost.

  8. Selection, application, and validation of a set of molecular descriptors for nuclear receptor ligands.

    PubMed

    Stewart, Eugene L; Brown, Peter J; Bentley, James A; Willson, Timothy M

    2004-08-01

    A methodology for the selection and validation of nuclear receptor ligand chemical descriptors is described. After descriptors for a targeted chemical space were selected, a virtual screening methodology utilizing this space was formulated for the identification of potential NR ligands from our corporate collection. Using simple descriptors and our virtual screening method, we are able to quickly identify potential NR ligands from a large collection of compounds. As validation of the virtual screening procedure, an 8, 000-membered NR targeted set and a 24, 000-membered diverse control set of compounds were selected from our in-house general screening collection and screened in parallel across a number of orphan NR FRET assays. For the two assays that provided at least one hit per set by the established minimum pEC(50) for activity, the results showed a 2-fold increase in the hit-rate of the targeted compound set over the diverse set.

  9. Tomography for two-dimensional gas temperature distribution based on TDLAS

    NASA Astrophysics Data System (ADS)

    Luo, Can; Wang, Yunchu; Xing, Fei

    2018-03-01

    Based on tunable diode laser absorption spectroscopy (TDLAS), the tomography is used to reconstruct the combustion gas temperature distribution. The effects of number of rays, number of grids, and spacing of rays on the temperature reconstruction results for parallel ray are researched. The reconstruction quality is proportional to the ray number. The quality tends to be smoother when the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual ray method combined with the reconstruction algorithms is tested. It is found that virtual ray method is effective to improve the accuracy of reconstruction results, compared with the original method. The linear interpolation method and cubic spline interpolation method, are used to improve the calculation accuracy of virtual ray absorption value. According to the calculation results, cubic spline interpolation is better. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate those conclusions.

  10. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael

    Extensive computing power has been used to tackle issues such as climate changes, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently only run a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, whilemore » we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running, it can make progress while as soon as one MPI node fails the whole analysis job fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.« less

  11. Randomized Clinical Trial of Virtual Reality Simulation Training for Transvaginal Gynecologic Ultrasound Skills.

    PubMed

    Chao, Coline; Chalouhi, Gihad E; Bouhanna, Philippe; Ville, Yves; Dommergues, Marc

    2015-09-01

    To compare the impact of virtual reality simulation training and theoretical teaching on the ability of inexperienced trainees to produce adequate virtual transvaginal ultrasound images. We conducted a randomized controlled trial with parallel groups. Participants included inexperienced residents starting a training program in Paris. The intervention consisted of 40 minutes of virtual reality simulation training using a haptic transvaginal simulator versus 40 minutes of conventional teaching including a conference with slides and videos and answers to the students' questions. The outcome was a 19-point image quality score calculated from a set of 4 images (sagittal and coronal views of the uterus and left and right ovaries) produced by trainees immediately after the intervention, using the same simulator on which a new virtual patient had been uploaded. Experts assessed the outcome on stored images, presented in a random order, 2 months after the trial was completed. They were blinded to group assignment. The hypothesis was an improved outcome in the intervention group. Randomization was 1 to 1. The mean score was significantly greater in the simulation group (n = 16; mean score, 12; SEM, 0.8) than the control group (n = 18; mean score, 9; SEM, 1.0; P= .0302). The quality of virtual vaginal images produced by inexperienced trainees was greater immediately after a single virtual reality simulation training session than after a single theoretical teaching session. © 2015 by the American Institute of Ultrasound in Medicine.

  12. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem both under simulation and with direct experimentation on parallel systems. Our three year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, The Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX- like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications fiom levels 1,2 and 3 of the typical RDC processing scenario including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  13. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  14. Embodied information behavior, mixed reality and big data

    NASA Astrophysics Data System (ADS)

    West, Ruth; Parola, Max J.; Jaycen, Amelia R.; Lueg, Christopher P.

    2015-03-01

    A renaissance in the development of virtual (VR), augmented (AR), and mixed reality (MR) technologies with a focus on consumer and industrial applications is underway. As data becomes ubiquitous in our lives, a need arises to revisit the role of our bodies, explicitly in relation to data or information. Our observation is that VR/AR/MR technology development is a vision of the future framed in terms of promissory narratives. These narratives develop alongside the underlying enabling technologies and create new use contexts for virtual experiences. It is a vision rooted in the combination of responsive, interactive, dynamic, sharable data streams, and augmentation of the physical senses for capabilities beyond those normally humanly possible. In parallel to the varied definitions of information and approaches to elucidating information behavior, a myriad of definitions and methods of measuring and understanding presence in virtual experiences exist. These and other ideas will be tested by designers, developers and technology adopters as the broader ecology of head-worn devices for virtual experiences evolves in order to reap the full potential and benefits of these emerging technologies.

  15. Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2015-01-01

    Advancement in chemoinformatics research in parallel with availability of high performance computing platform has made handling of large scale multi-dimensional scientific data for high throughput drug discovery easier. In this study we have explored publicly available molecular databases with the help of open-source based integrated in-house molecular informatics tools for virtual screening. The virtual screening literature for past decade has been extensively investigated and thoroughly analyzed to reveal interesting patterns with respect to the drug, target, scaffold and disease space. The review also focuses on the integrated chemoinformatics tools that are capable of harvesting chemical data from textual literature information and transform them into truly computable chemical structures, identification of unique fragments and scaffolds from a class of compounds, automatic generation of focused virtual libraries, computation of molecular descriptors for structure-activity relationship studies, application of conventional filters used in lead discovery along with in-house developed exhaustive PTC (Pharmacophore, Toxicophores and Chemophores) filters and machine learning tools for the design of potential disease specific inhibitors. A case study on kinase inhibitors is provided as an example.

  16. B cells are not essential for Lactobacillus-mediated protection against lethal pneumovirus infection*

    PubMed Central

    Percopo, Caroline M.; Dyer, Kimberly D.; Garcia-Crespo, Katia E.; Gabryszewski, Stanislaw J.; Shaffer, Arthur L.; Domachowske, Joseph B.; Rosenberg, Helene F.

    2014-01-01

    We have shown previously that priming of respiratory mucosa with live Lactobacillus species promotes robust and prolonged survival from an otherwise lethal infection with pneumonia virus of mice (PVM), a property known as heterologous immunity. Lactobacillus-priming results in a moderate reduction in virus recovery and a dramatic reduction in virus-induced proinflammatory cytokine production; the precise mechanisms underlying these findings remain to be elucidated. As B cells have been shown to promote heterologous immunity against respiratory virus pathogens under similar conditions, here we explore the role of B cells in Lactobacillus-mediated protection against acute pneumovirus infection. We found that Lactobacillus-primed mice feature elevated levels of airway immunoglobulins IgG, IgA and IgM and lung tissues with dense, B cell (B220+) enriched peribronchial and perivascular infiltrates with germinal centers consistent with descriptions of bronchus-associated lymphoid tissue. No B cells were detected in lung tissue of Lactobacillus-primed B-cell deficient μMT mice or Jh mice, and Lactobacillus-primed μMT mice had no characteristic infiltrates or airway immunoglobulins. Nonetheless, we observed diminished virus recovery and profound suppression of virus-induced proinflammatory cytokines CCL2, IFN-gamma, and CXCL10 in both wild-type and Lactobacillus-primed μMT mice. Furthermore, L. plantarum-primed, B-cell deficient μMT and Jh mice were fully protected from an otherwise lethal PVM infection, as were their respective wild-types. We conclude that B cells are dispensable for Lactobacillus-mediated heterologous immunity and were not crucial for promoting survival in response to an otherwise lethal pneumovirus infection. PMID:24748495

  17. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  18. Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.

    PubMed

    Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano

    2014-09-09

    A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.

  19. GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.

    PubMed

    Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim

    2016-08-01

    In this paper, we propose an GPU based Cloud system for high-performance arrhythmia detection. Pan-Tompkins algorithm is used for QRS detection and we optimized beat classification algorithm with K-Nearest Neighbor (K-NN). To support high performance beat classification on the system, we parallelized beat classification algorithm with CUDA to execute the algorithm on virtualized GPU devices on the Cloud system. MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved about 93.5% of detection rate which is comparable to previous researches while our algorithm shows 2.5 times faster execution time compared to CPU only detection algorithm.

  20. nextPARS: parallel probing of RNA structures in Illumina

    PubMed Central

    Saus, Ester; Willis, Jesse R.; Pryszcz, Leszek P.; Hafez, Ahmed; Llorens, Carlos; Himmelbauer, Heinz

    2018-01-01

    RNA molecules play important roles in virtually every cellular process. These functions are often mediated through the adoption of specific structures that enable RNAs to interact with other molecules. Thus, determining the secondary structures of RNAs is central to understanding their function and evolution. In recent years several sequencing-based approaches have been developed that allow probing structural features of thousands of RNA molecules present in a sample. Here, we describe nextPARS, a novel Illumina-based implementation of in vitro parallel probing of RNA structures. Our approach achieves comparable accuracy to previous implementations, while enabling higher throughput and sample multiplexing. PMID:29358234

  1. LAVA web-based remote simulation: enhancements for education and technology innovation

    NASA Astrophysics Data System (ADS)

    Lee, Sang Il; Ng, Ka Chun; Orimoto, Takashi; Pittenger, Jason; Horie, Toshi; Adam, Konstantinos; Cheng, Mosong; Croffie, Ebo H.; Deng, Yunfei; Gennari, Frank E.; Pistor, Thomas V.; Robins, Garth; Williamson, Mike V.; Wu, Bo; Yuan, Lei; Neureuther, Andrew R.

    2001-09-01

    The Lithography Analysis using Virtual Access (LAVA) web site at http://cuervo.eecs.berkeley.edu/Volcano/ has been enhanced with new optical and deposition applets, graphical infrastructure and linkage to parallel execution on networks of workstations. More than ten new graphical user interface applets have been designed to support education, illustrate novel concepts from research, and explore usage of parallel machines. These applets have been improved through feedback and classroom use. Over the last year LAVA provided industry and other academic communities 1,300 session and 700 rigorous simulations per month among the SPLAT, SAMPLE2D, SAMPLE3D, TEMPEST, STORM, and BEBS simulators.

  2. NASA Tech Briefs, January 2006

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Topics covered include: Semiautonomous Avionics-and-Sensors System for a UAV; Biomimetic/Optical Sensors for Detecting Bacterial Species; System Would Detect Foreign-Object Damage in Turbofan Engine; Detection of Water Hazards for Autonomous Robotic Vehicles; Fuel Cells Utilizing Oxygen From Air at Low Pressures; Hybrid Ion-Detector/Data-Acquisition System for a TOF-MS; Spontaneous-Desorption Ionizer for a TOF-MS; Equipment for On-Wafer Testing From 220 to 325 GHz; Computing Isentropic Flow Properties of Air/R-134a Mixtures; Java Mission Evaluation Workstation System; Using a Quadtree Algorithm To Assess Line of Sight; Software for Automated Generation of Cartesian Meshes; Optics Program Modified for Multithreaded Parallel Computing; Programs for Testing Processor-in-Memory Computing Systems; PVM Enhancement for Beowulf Multiple-Processor Nodes; Ion-Exclusion Chromatography for Analyzing Organics in Water; Selective Plasma Deposition of Fluorocarbon Films on SAMs; Water-Based Pressure-Sensitive Paints; System Finds Horizontal Location of Center of Gravity; Predicting Tail Buffet Loads of a Fighter Airplane; Water Containment Systems for Testing High-Speed Flywheels; Vapor-Compression Heat Pumps for Operation Aboard Spacecraft; Multistage Electrophoretic Separators; Recovering Residual Xenon Propellant for an Ion Propulsion System; Automated Solvent Seaming of Large Polyimide Membranes; Manufacturing Precise, Lightweight Paraboloidal Mirrors; Analysis of Membrane Lipids of Airborne Micro-Organisms; Noninvasive Diagnosis of Coronary Artery Disease Using 12-Lead High-Frequency Electrocardiograms; Dual-Laser-Pulse Ignition; Enhanced-Contrast Viewing of White-Hot Objects in Furnaces; Electrically Tunable Terahertz Quantum-Cascade Lasers; Few-Mode Whispering-Gallery-Mode Resonators; Conflict-Aware Scheduling Algorithm; and Real-Time Diagnosis of Faults Using a Bank of Kalman Filters.

  3. Closed-Loop Task Difficulty Adaptation during Virtual Reality Reach-to-Grasp Training Assisted with an Exoskeleton for Stroke Rehabilitation

    PubMed Central

    Grimm, Florian; Naros, Georgios; Gharabaghi, Alireza

    2016-01-01

    Stroke patients with severe motor deficits of the upper extremity may practice rehabilitation exercises with the assistance of a multi-joint exoskeleton. Although this technology enables intensive task-oriented training, it may also lead to slacking when the assistance is too supportive. Preserving the engagement of the patients while providing “assistance-as-needed” during the exercises, therefore remains an ongoing challenge. We applied a commercially available seven degree-of-freedom arm exoskeleton to provide passive gravity compensation during task-oriented training in a virtual environment. During this 4-week pilot study, five severely affected chronic stroke patients performed reach-to-grasp exercises resembling activities of daily living. The subjects received virtual reality feedback from their three-dimensional movements. The level of difficulty for the exercise was adjusted by a performance-dependent real-time adaptation algorithm. The goal of this algorithm was the automated improvement of the range of motion. In the course of 20 training and feedback sessions, this unsupervised adaptive training concept led to a progressive increase of the virtual training space (p < 0.001) in accordance with the subjects' abilities. This learning curve was paralleled by a concurrent improvement of real world kinematic parameters, i.e., range of motion (p = 0.008), accuracy of movement (p = 0.01), and movement velocity (p < 0.001). Notably, these kinematic gains were paralleled by motor improvements such as increased elbow movement (p = 0.001), grip force (p < 0.001), and upper extremity Fugl-Meyer-Assessment score from 14.3 ± 5 to 16.9 ± 6.1 (p = 0.026). Combining gravity-compensating assistance with adaptive closed-loop feedback in virtual reality provides customized rehabilitation environments for severely affected stroke patients. This approach may facilitate motor learning by progressively challenging the subject in accordance with the individual capacity for functional restoration. It might be necessary to apply concurrent restorative interventions to translate these improvements into relevant functional gains of severely motor impaired patients in activities of daily living. PMID:27895550

  4. Closed-Loop Task Difficulty Adaptation during Virtual Reality Reach-to-Grasp Training Assisted with an Exoskeleton for Stroke Rehabilitation.

    PubMed

    Grimm, Florian; Naros, Georgios; Gharabaghi, Alireza

    2016-01-01

    Stroke patients with severe motor deficits of the upper extremity may practice rehabilitation exercises with the assistance of a multi-joint exoskeleton. Although this technology enables intensive task-oriented training, it may also lead to slacking when the assistance is too supportive. Preserving the engagement of the patients while providing "assistance-as-needed" during the exercises, therefore remains an ongoing challenge. We applied a commercially available seven degree-of-freedom arm exoskeleton to provide passive gravity compensation during task-oriented training in a virtual environment. During this 4-week pilot study, five severely affected chronic stroke patients performed reach-to-grasp exercises resembling activities of daily living. The subjects received virtual reality feedback from their three-dimensional movements. The level of difficulty for the exercise was adjusted by a performance-dependent real-time adaptation algorithm. The goal of this algorithm was the automated improvement of the range of motion. In the course of 20 training and feedback sessions, this unsupervised adaptive training concept led to a progressive increase of the virtual training space ( p < 0.001) in accordance with the subjects' abilities. This learning curve was paralleled by a concurrent improvement of real world kinematic parameters, i.e., range of motion ( p = 0.008), accuracy of movement ( p = 0.01), and movement velocity ( p < 0.001). Notably, these kinematic gains were paralleled by motor improvements such as increased elbow movement ( p = 0.001), grip force ( p < 0.001), and upper extremity Fugl-Meyer-Assessment score from 14.3 ± 5 to 16.9 ± 6.1 ( p = 0.026). Combining gravity-compensating assistance with adaptive closed-loop feedback in virtual reality provides customized rehabilitation environments for severely affected stroke patients. This approach may facilitate motor learning by progressively challenging the subject in accordance with the individual capacity for functional restoration. It might be necessary to apply concurrent restorative interventions to translate these improvements into relevant functional gains of severely motor impaired patients in activities of daily living.

  5. Closed-form dynamics of a hexarot parallel manipulator by means of the principle of virtual work

    NASA Astrophysics Data System (ADS)

    Pedrammehr, Siamak; Nahavandi, Saeid; Abdi, Hamid

    2018-04-01

    In this research, a systematic approach to solving the inverse dynamics of hexarot manipulators is addressed using the methodology of virtual work. For the first time, a closed form of the mathematical formulation of the standard dynamic model is presented for this class of mechanisms. An efficient algorithm for solving this closed-form dynamic model of the mechanism is developed and it is used to simulate the dynamics of the system for different trajectories. Validation of the proposed model is performed using SimMechanics and it is shown that the results of the proposed mathematical model match with the results obtained by the SimMechanics model.

  6. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  7. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  8. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HlTL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control or the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  9. The Impact of Cognitive Styles on Design Students' Spatial Knowledge from Virtual Environments

    ERIC Educational Resources Information Center

    Yildirm, Isil; Zengel, Rengin

    2014-01-01

In parallel with the technological developments that have made digital tools dominant in science and education, knowledge is being transformed and conveyed in new ways. The reflection of this integration is seen in the design discipline, which plays an active role in this cycle both in practice and in education, benefiting from the capabilities of new…

  10. An Overview of High Performance Computing and Challenges for the Future

    ScienceCinema

    Google Tech Talks

    2017-12-09

In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our software. A new generation of software libraries and algorithms are needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches. He is a Fellow of the AAAS, ACM, and the IEEE and a member of the National Academy of Engineering.

  11. An Overview of High Performance Computing and Challenges for the Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Google Tech Talks

In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our software. A new generation of software libraries and algorithms are needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches. He is a Fellow of the AAAS, ACM, and the IEEE and a member of the National Academy of Engineering.

  12. Parallel, distributed and GPU computing technologies in single-particle electron microscopy

    PubMed Central

    Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-01-01

    Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined. PMID:19564686

  13. Parallel, distributed and GPU computing technologies in single-particle electron microscopy.

    PubMed

    Schmeisser, Martin; Heisen, Burkhard C; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-07-01

    Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.

  14. A robot arm simulation with a shared memory multiprocessor machine

    NASA Technical Reports Server (NTRS)

    Kim, Sung-Soo; Chuang, Li-Ping

    1989-01-01

    A parallel processing scheme for a single chain robot arm is presented for high speed computation on a shared memory multiprocessor. A recursive formulation that is derived from a virtual work form of the d'Alembert equations of motion is utilized for robot arm dynamics. A joint drive system that consists of a motor rotor and gears is included in the arm dynamics model, in order to take into account gyroscopic effects due to the spinning of the rotor. The fine grain parallelism of mechanical and control subsystem models is exploited, based on independent computation associated with bodies, joint drive systems, and controllers. Efficiency and effectiveness of the parallel scheme are demonstrated through simulations of a telerobotic manipulator arm. Two different mechanical subsystem models, i.e., with and without gyroscopic effects, are compared, to show the trade-off between efficiency and accuracy.

  15. Inflated speedups in parallel simulations via malloc()

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

Discrete-event simulation programs make heavy use of dynamic memory allocation in order to support simulation's very dynamic space requirements. When programming in C one is likely to use the malloc() routine. However, a parallel simulation which uses the standard Unix System V malloc() implementation may achieve an overly optimistic speedup, possibly superlinear. An alternate implementation provided on some (but not all) systems can avoid the speedup anomaly, but at the price of significantly reduced available free space. This is especially severe on most parallel architectures, which tend not to support virtual memory. It is shown how a simply implemented user-constructed interface to malloc() can both avoid artificially inflated speedups and make efficient use of the dynamic memory space. The interface simply caches blocks on the basis of their size. The problem is demonstrated empirically, and the effectiveness of the solution is shown both empirically and analytically.
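
    The user-constructed interface described here amounts to a size-binned free list: freed blocks are cached by size class and recycled before falling back to the underlying allocator. The sketch below is a conceptual illustration of that recycling policy in Python (the paper's interface was, of course, a C wrapper around malloc()/free()); the power-of-two size classes are an illustrative assumption.

        # Conceptual sketch of a size-binned allocation cache: freed "blocks"
        # are kept on per-size free lists and reused, so allocation cost stays
        # uniform instead of depending on heap fragmentation history.
        from collections import defaultdict

        class BlockCache:
            def __init__(self):
                self.free_lists = defaultdict(list)   # size class -> cached blocks

            @staticmethod
            def _size_class(n):
                c = 1
                while c < n:                # round request up to a power of two
                    c <<= 1
                return c

            def alloc(self, n):
                c = self._size_class(n)
                if self.free_lists[c]:      # reuse a cached block of this class
                    return self.free_lists[c].pop()
                return bytearray(c)         # fall back to the underlying allocator

            def free(self, block):
                self.free_lists[len(block)].append(block)   # cache, don't release

        cache = BlockCache()
        ev = cache.alloc(48)    # e.g. an event record in a discrete-event simulation
        cache.free(ev)
        assert cache.alloc(48) is ev   # the freed block is recycled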

  16. The AIS-5000 parallel processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitt, L.A.; Wilson, S.S.

    1988-05-01

The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared with two-dimensional mesh-connected systems. The design of the processing elements and their interconnections as well as the software used to program the system allow a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.

  17. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    NASA Technical Reports Server (NTRS)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
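
    A minimal sketch of the Jacobi preconditioned conjugate gradient iteration mentioned above, written in Python/NumPy rather than as the FPGA implementation the authors propose; the stiffness matrix K and load vector f here are small stand-ins for the reduced system obtained from surface domain decomposition.

        # Hedged sketch: Jacobi-preconditioned conjugate gradient for K u = f,
        # with K a symmetric positive definite (reduced) stiffness matrix.
        import numpy as np

        def jacobi_pcg(K, f, tol=1e-8, max_iter=1000):
            M_inv = 1.0 / np.diag(K)          # Jacobi preconditioner: inverse diagonal
            u = np.zeros_like(f)
            r = f - K @ u                     # residual
            z = M_inv * r                     # preconditioned residual
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Kp = K @ p
                alpha = rz / (p @ Kp)
                u += alpha * p
                r -= alpha * Kp
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p     # update search direction
                rz = rz_new
            return u

        # Tiny SPD test system standing in for a soft-tissue stiffness matrix.
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(jacobi_pcg(A, b))               # approx. [0.0909, 0.6364]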

  18. Bioinformatics Pipelines for Targeted Resequencing and Whole-Exome Sequencing of Human and Mouse Genomes: A Virtual Appliance Approach for Instant Deployment

    PubMed Central

    Saeed, Isaam; Wong, Stephen Q.; Mar, Victoria; Goode, David L.; Caramia, Franco; Doig, Ken; Ryland, Georgina L.; Thompson, Ella R.; Hunter, Sally M.; Halgamuge, Saman K.; Ellul, Jason; Dobrovic, Alexander; Campbell, Ian G.; Papenfuss, Anthony T.; McArthur, Grant A.; Tothill, Richard W.

    2014-01-01

    Targeted resequencing by massively parallel sequencing has become an effective and affordable way to survey small to large portions of the genome for genetic variation. Despite the rapid development in open source software for analysis of such data, the practical implementation of these tools through construction of sequencing analysis pipelines still remains a challenging and laborious activity, and a major hurdle for many small research and clinical laboratories. We developed TREVA (Targeted REsequencing Virtual Appliance), making pre-built pipelines immediately available as a virtual appliance. Based on virtual machine technologies, TREVA is a solution for rapid and efficient deployment of complex bioinformatics pipelines to laboratories of all sizes, enabling reproducible results. The analyses that are supported in TREVA include: somatic and germline single-nucleotide and insertion/deletion variant calling, copy number analysis, and cohort-based analyses such as pathway and significantly mutated genes analyses. TREVA is flexible and easy to use, and can be customised by Linux-based extensions if required. TREVA can also be deployed on the cloud (cloud computing), enabling instant access without investment overheads for additional hardware. TREVA is available at http://bioinformatics.petermac.org/treva/. PMID:24752294

  19. An artificial reality environment for remote factory control and monitoring

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    Work has begun on the merger of two well known systems, VEOS (HITLab) and CLIPS (NASA). In the recent past, the University of Massachusetts Lowell developed a parallel version of NASA CLIPS, called P-CLIPS. This modification allows users to create smaller expert systems which are able to communicate with each other to jointly solve problems. With the merger of a VEOS message system, PCLIPS-V can now act as a group of entities working within VEOS. To display the 3D virtual world we have been using a graphics package called HOOPS, from Ithaca Software. The artificial reality environment we have set up contains actors and objects as found in our Lincoln Logs Factory of the Future project. The environment allows us to view and control the objects within the virtual world. All communication between the separate CLIPS expert systems is done through VEOS. A graphical renderer generates camera views on X-Windows devices; Head Mounted Devices are not required. This allows more people to make use of this technology. We are experimenting with different types of virtual vehicles to give the user a sense that he or she is actually moving around inside the factory looking ahead through windows and virtual monitors.

  20. Using DGGE and 16S rRNA gene sequence analysis to evaluate changes in oral bacterial composition.

    PubMed

    Chen, Zhou; Trivedi, Harsh M; Chhun, Nok; Barnes, Virginia M; Saxena, Deepak; Xu, Tao; Li, Yihong

    2011-01-01

To investigate whether a standard dental prophylaxis followed by tooth brushing with an antibacterial dentifrice will affect the oral bacterial community, as determined by denaturing gradient gel electrophoresis (DGGE) combined with 16S rRNA gene sequence analysis. Twenty-four healthy adults were instructed to brush their teeth using a commercial dentifrice for 1 week during a washout period. An initial set of pooled supragingival plaque samples was collected from each participant at baseline (0 h) before prophylaxis treatment. The subjects were given a clinical examination and dental prophylaxis and asked to brush for 1 min with a dentifrice containing 0.3% triclosan, 2.0% PVM/MA copolymer and 0.243% sodium fluoride (Colgate Total). On the following day, a second set of pooled supragingival plaque samples (24 h) was collected. Total bacterial genomic DNA was isolated from the samples. Differences in the microbial composition before and after the prophylactic procedure and tooth brushing were assessed by comparing the DGGE profiles and 16S rRNA gene segment sequence analysis. Two distinct clusters of DGGE profiles were found, suggesting that a shift in the microbial composition had occurred 24 h after the prophylaxis and brushing. A detailed sequencing analysis of 16S rRNA gene segments further identified 6 phyla and 29 genera, including known and unknown bacterial species. Importantly, an increase in bacterial diversity was observed after 24 h, including members of the Streptococcaceae family, Prevotella, Corynebacterium, TM7 and other commensal bacteria. The results suggest that the use of a standard prophylaxis followed by the use of the dentifrice containing 0.3% triclosan, 2.0% PVM/MA copolymer and 0.243% sodium fluoride may promote a healthier composition within the oral bacterial community.

  1. The Rhoptry Proteins ROP18 and ROP5 Mediate Toxoplasma gondii Evasion of the Murine, But Not the Human, Interferon-Gamma Response

    PubMed Central

    Niedelman, Wendy; Gold, Daniel A.; Rosowski, Emily E.; Sprokholt, Joris K.; Lim, Daniel; Farid Arenas, Ailan; Melo, Mariane B.; Spooner, Eric; Yaffe, Michael B.; Saeij, Jeroen P. J.

    2012-01-01

    The obligate intracellular parasite Toxoplasma gondii secretes effector proteins into the host cell that manipulate the immune response allowing it to establish a chronic infection. Crosses between the types I, II and III strains, which are prevalent in North America and Europe, have identified several secreted effectors that determine strain differences in mouse virulence. The polymorphic rhoptry protein kinase ROP18 was recently shown to determine the difference in virulence between type I and III strains by phosphorylating and inactivating the interferon-γ (IFNγ)-induced immunity-related GTPases (IRGs) that promote killing by disrupting the parasitophorous vacuole membrane (PVM) in murine cells. The polymorphic pseudokinase ROP5 determines strain differences in virulence through an unknown mechanism. Here we report that ROP18 can only inhibit accumulation of the IRGs on the PVM of strains that also express virulent ROP5 alleles. In contrast, specific ROP5 alleles can reduce IRG coating even in the absence of ROP18 expression and can directly interact with one or more IRGs. We further show that the allelic combination of ROP18 and ROP5 also determines IRG evasion and virulence of strains belonging to other lineages besides types I, II and III. However, neither ROP18 nor ROP5 markedly affect survival in IFNγ-activated human cells, which lack the multitude of IRGs present in murine cells. These findings suggest that ROP18 and ROP5 have specifically evolved to block the IRGs and are unlikely to have effects in species that do not have the IRG system, such as humans. PMID:22761577

  2. Effect of a single brushing with two Zn-containing toothpastes on VSC in morning breath: a 12 h, randomized, double-blind, cross-over clinical study.

    PubMed

    Young, A; Jonski, G

    2011-12-01

This randomized, double-blind, 12 h clinical study tested the effect of a single brushing with two Zn-containing toothpastes on volatile sulfur compound (VSC) levels in morning breath. The following toothpastes were each tested by all 28 participants: A - Zn toothpaste, B - experimental toothpaste (Zn citrate + PVM/MA copolymer) and C - control toothpaste without Zn. The evening prior to test days participants brushed their teeth for 2 min with 1 g toothpaste. 12 h later and prior to eating or performing oral hygiene, morning breath levels of VSC (H2S, CH3SH) were analysed by gas chromatography. Subjects then rinsed for 30 s with 5 ml cysteine and breath samples were analysed for H2S (H2S(cys)). Median VSC (area under the curve) values were compared for A, B and C and the effects of A and B on VSC were compared with C. Toothpaste B was more effective than both toothpastes A and C in reducing H2S, CH3SH and H2S(cys) (p < 0.05). Compared with toothpaste C, toothpastes A and B reduced H2S by 35% and 68%, respectively (p = 0.003), and CH3SH by 12% and 47%, respectively (p = 0.002). Toothpaste B reduced H2S(cys) by 48% compared with toothpaste C (p = 0.001). It is suggested that the superior effect of the experimental toothpaste was most likely due to a higher Zn concentration combined with longer retention of Zn due to the PVM/MA copolymer.

  3. Real-time monitoring of the mechanism of ibuprofen-cationic dextran crystanule formation using crystallization process informatics system (CryPRINS).

    PubMed

    Abioye, Amos Olusegun; Chi, George Tangyie; Simone, Elena; Nagy, Zoltan

    2016-07-25

One-step aqueous melt-crystallization and in situ granulation was utilized to produce ibuprofen-cationic dextran [diethylaminoethyl dextran (Ddex)] conjugate crystanules without the use of surfactants or organic solvents. This study investigates the mechanism of in situ granulation-induced crystanule formation using ibuprofen (Ibu) and Ddex. A laboratory-scale batch aqueous crystallization system containing in situ monitoring probes for particle vision measurement (PVM), UV-vis measurement and focused beam reflectance measurement (FBRM) was adapted using pre-defined formulation and process parameters. Pure ibuprofen showed a nucleation domain between 25 and 64°C, producing minicrystals with onset of melting at 76°C and enthalpy of fusion (ΔH) of 26.22 kJ/mol. On the other hand, Ibu-Ddex crystanules showed heterogeneous nucleation, which produced a spherical core-shell structure. PVM images suggest that internalization of ibuprofen in the Ddex corona occurred during the melting phase (before nucleation), which inhibited crystal growth inside the Ddex corona. The remarkable decrease in ΔH of the crystanules from 26.22 to 11.96 kJ/mol and the presence of a broad overlapping DSC thermogram suggest formation of an ibuprofen-Ddex complex and crystalline-amorphous transformation. However, Raman and FTIR spectra did not show any significant chemical interaction between ibuprofen and Ddex. A significant increase in dissolution efficiency from 45 to 81% within 24 h and reduced burst release provide evidence for potential application of crystanules in controlled drug delivery systems. It was evident that in situ granulation of ibuprofen inhibited the aqueous crystallization process. It was concluded that the in situ granulation-aqueous crystallization technique is a novel unit operation with potential application in continuous pharmaceutical processing. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Evaluation of feature-based 3-d registration of probabilistic volumetric scenes

    NASA Astrophysics Data System (ADS)

    Restrepo, Maria I.; Ulusoy, Ali O.; Mundy, Joseph L.

    2014-12-01

    Automatic estimation of the world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time-instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration performance is significantly more accurate when using the PVM.

  5. Architectural constructs of Ampex DST

    NASA Technical Reports Server (NTRS)

    Johnson, Clay

    1993-01-01

    The DST 800 automated library is a high performance, automated tape storage system, developed by AMPEX, providing mass storage to host systems. Physical Volume Manager (PVM) is a volume server which supports either a DST 800, DST 600 stand alone tape drive, or a combination of DST 800 and DST 600 subsystems. The objective of the PVM is to provide the foundation support to allow automated and operator assisted access to the DST cartridges with continuous operation. A second objective is to create a data base about the media, its location, and its usage so that the quality and utilization of the media on which specific data is recorded and the performance of the storage system may be managed. The DST tape drive architecture and media provides several unique functions that enhance the ability to achieve high media space utilization and fast access. Access times are enhanced through the implementation of multiple areas (called system zones) on the media where the media may be unloaded. This reduces positioning time in loading and unloading the cartridge. Access times are also reduced through high speed positioning in excess of 800 megabytes per second. A DST cartridge can be partitioned into fixed size units which can be reclaimed for rewriting without invalidating other recorded data on the tape cartridge. Most tape management systems achieve space reclamation by deleting an entire tape volume, then allowing users to request a 'scratch tape' or 'nonspecific' volume when they wish to record data to tape. Physical cartridge sizes of 25, 75, or 165 gigabytes will make this existing process inefficient or unusable. The DST cartridge partitioning capability provides an efficient mechanism for addressing the tape space utilization problem.

  6. A distributed version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.; Curlett, Brian P.

    1993-01-01

Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computer environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.

  7. Collective network for computer structures

    DOEpatents

    Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M

    2014-01-07

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  8. Collective network for computer structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY

    2011-08-16

A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  9. Performance Studies on Distributed Virtual Screening

    PubMed Central

    Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.

    2014-01-01

    Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those structures binding possibly to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting maximizing the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
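
    The trade-off the study measures, between parallelism and per-chunk overhead, can be captured by a simple model (our illustrative formulation, not the paper's): if a sequential campaign takes T_seq and each of c chunks incurs a fixed submission/staging overhead t_s paid serially, the makespan and its optimum are

        T(c) = \frac{T_{\mathrm{seq}}}{c} + c\,t_s,
        \qquad
        \frac{dT}{dc} = -\frac{T_{\mathrm{seq}}}{c^{2}} + t_s = 0
        \;\Rightarrow\;
        c^{*} = \sqrt{\frac{T_{\mathrm{seq}}}{t_s}}.

    Below c^{*}, adding chunks helps almost linearly, consistent with the near-linear scaling up to roughly 500 concurrent processes reported here; beyond it, the per-chunk overhead dominates.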

  10. Application of QSAR and shape pharmacophore modeling approaches for targeted chemical library design.

    PubMed

    Ebalunode, Jerry O; Zheng, Weifan; Tropsha, Alexander

    2011-01-01

    Optimization of chemical library composition affords more efficient identification of hits from biological screening experiments. The optimization could be achieved through rational selection of reagents used in combinatorial library synthesis. However, with a rapid advent of parallel synthesis methods and availability of millions of compounds synthesized by many vendors, it may be more efficient to design targeted libraries by means of virtual screening of commercial compound collections. This chapter reviews the application of advanced cheminformatics approaches such as quantitative structure-activity relationships (QSAR) and pharmacophore modeling (both ligand and structure based) for virtual screening. Both approaches rely on empirical SAR data to build models; thus, the emphasis is placed on achieving models of the highest rigor and external predictive power. We present several examples of successful applications of both approaches for virtual screening to illustrate their utility. We suggest that the expert use of both QSAR and pharmacophore models, either independently or in combination, enables users to achieve targeted libraries enriched with experimentally confirmed hit compounds.

  11. Virtual-system-coupled adaptive umbrella sampling to compute free-energy landscape for flexible molecular docking.

    PubMed

    Higo, Junichi; Dasgupta, Bhaskar; Mashimo, Tadaaki; Kasahara, Kota; Fukunishi, Yoshifumi; Nakamura, Haruki

    2015-07-30

A novel enhanced conformational sampling method, virtual-system-coupled adaptive umbrella sampling (V-AUS), was proposed to compute the 300-K free-energy landscape for flexible molecular docking, where a virtual degree of freedom was introduced to control the sampling. This degree of freedom interacts with the biomolecular system. V-AUS was applied to complex formation of two disordered amyloid-β (Aβ30-35) peptides in a periodic box filled by an explicit solvent. An interpeptide distance was defined as the reaction coordinate, along which sampling was enhanced. A uniform conformational distribution was obtained covering a wide interpeptide distance range, from the bound to unbound states. The 300-K free-energy landscape was characterized by thermodynamically stable basins of antiparallel and parallel β-sheet complexes and some other complex forms. Helices were frequently observed when the two peptides contacted loosely or fluctuated freely without interpeptide contacts. We observed that V-AUS converged to a uniform distribution more effectively than conventional AUS sampling did. © 2015 Wiley Periodicals, Inc.

  12. Dots and dashes: art, virtual reality, and the telegraph

    NASA Astrophysics Data System (ADS)

    Ruzanka, Silvia; Chang, Ben

    2009-02-01

Dots and Dashes is a virtual reality artwork that explores online romance over the telegraph, based on Ella Cheever Thayer's novel Wired Love - a Romance in Dots and Dashes (an Old Story Told in a New Way). The uncanny similarities between this story and the world of today's virtual environments provide the springboard for an exploration of a wealth of anxieties and dreams, including the construction of identities in an electronically mediated environment, the shifting boundaries between the natural and machine worlds, and the spiritual dimensions of science and technology. In this paper we examine the parallels between the telegraph networks and our current conceptions of cyberspace, as well as unique social and cultural impacts specific to the telegraph. These include the new opportunities and roles available to women in the telegraph industry and the connection between the telegraph and the Spiritualist movement. We discuss the development of the artwork, its structure and aesthetics, and the technical development of the work.

  13. The interplays among technology and content, immersant and VE

    NASA Astrophysics Data System (ADS)

    Song, Meehae; Gromala, Diane; Shaw, Chris; Barnes, Steven J.

    2010-01-01

    The research program aims to explore and examine the fine balance necessary for maintaining the interplays between technology and the immersant, including identifying qualities that contribute to creating and maintaining a sense of "presence" and "immersion" in an immersive virtual reality (IVR) experience. Building upon and extending previous work, we compare sitting meditation with walking meditation in a virtual environment (VE). The Virtual Meditative Walk, a new work-in-progress, integrates VR and biofeedback technologies with a self-directed, uni-directional treadmill. As immersants learn how to meditate while walking, robust, real-time biofeedback technology continuously measures breathing, skin conductance and heart rate. The physiological states of the immersant will in turn affect the audio and stereoscopic visual media through shutter glasses. We plan to test the potential benefits and limitations of this physically active form of meditation with data from a sitting form of meditation. A mixed-methods approach to testing user outcomes parallels the knowledge bases of the collaborative team: a physician, computer scientists and artists.

  14. Global tree network for computing structures enabling global processing operations

    DOEpatents

Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.

  15. An accurate reactive power control study in virtual flux droop control

    NASA Astrophysics Data System (ADS)

    Wang, Aimeng; Zhang, Jia

    2017-12-01

This paper investigates the problem of reactive power sharing based on the virtual flux droop method. First, the flux droop control method is derived, in which complicated multiple feedback loops and parameter regulation are avoided. Then, the reasons for inaccurate reactive power sharing are theoretically analyzed. Further, a novel reactive power control scheme is proposed which consists of three parts: compensation control, voltage recovery control and flux droop control. Finally, the proposed reactive power control strategy is verified in a simplified microgrid model with two parallel DGs. The simulation results show that the proposed control scheme can achieve accurate reactive power sharing and zero voltage deviation. Meanwhile, it has the advantages of simple control and excellent dynamic and static performance.
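
    For context, the conventional droop laws that the paper's virtual-flux method replaces are the textbook P-f / Q-V relations (background only, not the flux droop equations derived in the paper):

        f = f^{*} - m_p\,(P - P^{*}),
        \qquad
        V = V^{*} - n_q\,(Q - Q^{*}),

    where f^{*} and V^{*} are the nominal frequency and voltage and m_p, n_q are droop coefficients. Mismatched line impedances between parallel DGs make the Q-V law share reactive power inaccurately, which is the problem the proposed compensation and voltage recovery controls address.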

  16. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2007-12-04

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  17. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    2010-02-23

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  18. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as the design and construction of buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, and military areas. However, most technologies provide 3D display in front of screens that are parallel with the walls, and the sense of immersion is decreased. To get a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset to the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display 3D models in a computer system. We can use virtual cameras to simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the viewer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near-clip-plane parameter setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for viewing horizontally, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
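
    The offset perspective projection cameras described above correspond to an off-axis viewing frustum whose asymmetric bounds follow from the eye (or virtual camera) position relative to the display plane. The sketch below is a generic asymmetric-frustum computation (our illustration, not the authors' code); the eye coordinates and screen extents are hypothetical, and the matrix follows the OpenGL glFrustum convention.

        # Hedged sketch: asymmetric (off-axis) perspective frustum for an eye
        # that is not centred on the screen plane, glFrustum convention.
        import numpy as np

        def off_axis_frustum(eye, screen_w, screen_h, near, far):
            """eye = (x, y, z): eye position relative to the screen centre,
            with the screen in the z=0 plane and z the viewing distance."""
            ex, ey, ez = eye
            scale = near / ez                   # project screen edges onto near plane
            left   = (-screen_w / 2 - ex) * scale
            right  = ( screen_w / 2 - ex) * scale
            bottom = (-screen_h / 2 - ey) * scale
            top    = ( screen_h / 2 - ey) * scale
            return np.array([
                [2*near/(right-left), 0, (right+left)/(right-left), 0],
                [0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0],
                [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                [0, 0, -1, 0]])

        # Eye 0.3 m right of centre, 1.5 m above a 2 m x 1.2 m ground image.
        P = off_axis_frustum((0.3, 0.0, 1.5), 2.0, 1.2, near=0.1, far=100.0)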

  19. A Parallel Vector Machine for the PM Programming Language

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2016-04-01

    PM is a new programming language which aims to make the writing of computational geoscience models on parallel hardware accessible to scientists who are not themselves expert parallel programmers. It is based around the concept of communicating operators: language constructs that enable variables local to a single invocation of a parallelised loop to be viewed as if they were arrays spanning the entire loop domain. This mechanism enables different loop invocations (which may or may not be executing on different processors) to exchange information in a manner that extends the successful Communicating Sequential Processes idiom from single messages to collective communication. Communicating operators avoid the additional synchronisation mechanisms, such as atomic variables, required when programming using the Partitioned Global Address Space (PGAS) paradigm. Using a single loop invocation as the fundamental unit of concurrency enables PM to uniformly represent different levels of parallelism from vector operations through shared memory systems to distributed grids. This paper describes an implementation of PM based on a vectorised virtual machine. On a single processor node, concurrent operations are implemented using masked vector operations. Virtual machine instructions operate on vectors of values and may be unmasked, masked using a Boolean field, or masked using an array of active vector cell locations. Conditional structures (such as if-then-else or while statement implementations) calculate and apply masks to the operations they control. A shift in mask representation from Boolean to location-list occurs when active locations become sufficiently sparse. Parallel loops unfold data structures (or vectors of data structures for nested loops) into vectors of values that may additionally be distributed over multiple computational nodes and then split into micro-threads compatible with the size of the local cache. Inter-node communication is accomplished using standard OpenMP and MPI. Performance analyses of the PM vector machine, demonstrating its scaling properties with respect to domain size and the number of processor nodes will be presented for a range of hardware configurations. The PM software and language definition are being made available under unrestrictive MIT and Creative Commons Attribution licenses respectively: www.pm-lang.org.
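
    The masked vector execution described here can be illustrated with NumPy (a conceptual sketch, not the PM virtual machine itself): an if-then-else is evaluated by applying each branch under a mask, and the mask representation can switch from a Boolean field to a list of active locations when few cells remain active.

        # Conceptual sketch of masked vector execution for
        # "if x > 0: y = sqrt(x) else: y = 0" over a whole loop domain.
        import numpy as np

        x = np.array([4.0, -1.0, 9.0, -2.0, 16.0])
        y = np.zeros_like(x)

        mask = x > 0.0                      # Boolean mask from the 'if' condition
        y[mask] = np.sqrt(x[mask])          # 'then' branch under the mask
        y[~mask] = 0.0                      # 'else' branch under the complement

        # When active cells become sparse, switch to a location-list form:
        active = np.flatnonzero(mask)       # indices of active vector cells
        y[active] = np.sqrt(x[active])      # same branch, index-masked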

  20. Stage Cylindrical Immersive Display

    NASA Technical Reports Server (NTRS)

    Abramyan, Lucy; Norris, Jeffrey S.; Powell, Mark W.; Mittman, David S.; Shams, Khawaja S.

    2011-01-01

Panoramic images with a wide field of view are intended to provide a better understanding of an environment by placing objects of the environment on one seamless image. However, understanding the sizes and relative positions of the objects in a panorama is not intuitive and prone to errors because the field of view is unnatural to human perception. Scientists are often faced with the difficult task of interpreting the sizes and relative positions of objects in an environment when viewing an image of the environment on computer monitors or prints. A panorama can display an object that appears to be to the right of the viewer when it is, in fact, behind the viewer. This misinterpretation can be very costly, especially when the environment is remote and/or only accessible by unmanned vehicles. A 270° cylindrical display has been developed that surrounds the viewer with carefully calibrated panoramic imagery that correctly engages their natural kinesthetic senses and provides a more accurate awareness of the environment. The cylindrical immersive display offers a more natural window to the environment than a standard cubic CAVE (Cave Automatic Virtual Environment), and the geometry allows multiple collocated users to simultaneously view data and share important decision-making tasks. A CAVE is an immersive virtual reality environment that allows one or more users to absorb themselves in a virtual environment. A common CAVE setup is a room-sized cube where the cube sides act as projection planes. By nature, all cubic CAVEs face a problem with edge matching at edges and corners of the display. Modern immersive displays have found ways to minimize seams by creating very tight edges, and rely on the user to ignore the seam. One significant deficiency of flat-walled CAVEs is that the sense of orientation and perspective within the scene is broken across adjacent walls. On any single wall, parallel lines properly converge at their vanishing point as they should, and the sense of perspective within the scene contained on only one wall has integrity. Unfortunately, parallel lines that lie on adjacent walls do not necessarily remain parallel. This results in inaccuracies in the scene that can distract the viewer and detract from the immersive experience of the CAVE.

  1. Virtualization in network and servers infrastructure to support dynamic system reconfiguration in ALMA

    NASA Astrophysics Data System (ADS)

    Shen, Tzu-Chiang; Ovando, Nicolás.; Bartsch, Marcelo; Simmond, Max; Vélez, Gastón; Robles, Manuel; Soto, Rubén.; Ibsen, Jorge; Saldias, Christian

    2012-09-01

ALMA is the first astronomical project being constructed and operated under an industrial approach due to the huge number of elements involved. In order to achieve the maximum throughput during the engineering and scientific commissioning phase, several production lines have been established to work in parallel. This decision required modification of the original system architecture, in which all the elements are controlled and operated within a unique Standard Test Environment (STE). Advances in the network industry, together with the maturity of the virtualization paradigm, allow us to provide a solution which can replicate the STE infrastructure without changing its network address definition. This is only possible with the Virtual Routing and Forwarding (VRF) and Virtual LAN (VLAN) concepts. The solution allows dynamic reconfiguration of antennas and other hardware across the production lines with minimum time and zero human intervention in the cabling. We also push the virtualization even further: classical rack-mount servers are being replaced and consolidated by blade servers, on top of which virtualized servers are centrally administered with VMware ESX. Hardware costs and system administration effort will be reduced considerably. This mechanism has been established and operated successfully during the last two years. This experience gave us the confidence to propose a solution for dividing the main operation array into subarrays using the same concept, which will introduce huge flexibility and efficiency for ALMA operations and eventually may simplify the complexity of the ALMA core observing software, since there will be no need to deal with subarray complexity at the software level.

  2. Mindfulness-based relapse prevention combined with virtual reality cue exposure for methamphetamine use disorder: Study protocol for a randomized controlled trial.

    PubMed

    Chen, Xi Jing; Wang, Dong Mei; Zhou, Li Dan; Winkler, Markus; Pauli, Paul; Sui, Nan; Li, Yong Hui

    2018-04-19

    Mindfulness-based relapse prevention (MBRP) is a method that combines cognitive behavioral relapse prevention with mindfulness practice. Research suggests that MBRP can effectively reduce withdrawal/craving in people with substance use disorder (SUD). An important part of MBRP is to practice mindfulness meditation to cope with high-risk situations for relapse, such as stimuli and situations associated with drug taking. Virtual reality cue exposure (VRCE) may be a complementary approach to MBRP as it allows for controlled and graded presentations of various high-risk situations with distal and proximal drug cues. The aim of the study is to investigate the effects of MBRP combined with VRCE, in comparison to MBRP alone or treatment as usual, on craving and emotional responses in people with methamphetamine use disorders. The study is a parallel randomized controlled study including 180 participants with methamphetamine use disorder. Three parallel groups will receive 8 weeks of MBRP combined with VRCE, MBRP alone, or treatment as usual, respectively. Craving, virtual cue reactivity, anxiety, depression, emotion regulation, mindfulness and drug-related attention bias will be assessed at pre-treatment, post-treatment, and 3 and 6 months of follow-up. This innovative study aims at investigating the effects of MBRP combined with VRCE in people with SUD. The combined intervention may have important clinical implications for relapse prevention due to its ease of application and high cost-effectiveness. This study may also stimulate research on the neuronal and psychological mechanisms of MBRP in substance use disorder. ChiCTR-INR-17013041. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Development of a Big Data Application Architecture for Navy Manpower, Personnel, Training, and Education

    DTIC Science & Technology

    2016-03-01

[Extraction fragment from the report's front-matter acronym list: IT (information technology), JBOD (just a bunch of disks), JDBC (Java database connectivity), JPME (Joint Professional Military Education), JSO (Joint Service Officer), JVM (Java virtual machine), MPP (massively parallel processing), MPTE (Manpower, Personnel, Training, and Education), NAVMAC (Navy…).] Connectors optimize the data transfer by obtaining metadata from the external database, whether it is MySQL, Oracle, DB2, or SQL Server (Teller, 2015).

  4. Chromatically corrected virtual image visual display. [reducing eye strain in flight simulators

    NASA Technical Reports Server (NTRS)

    Kahlbaum, W. M., Jr. (Inventor)

    1980-01-01

    An in-line, three element, large diameter, optical display lens is disclosed which has a front convex-convex element, a central convex-concave element, and a rear convex-convex element. The lens, used in flight simulators, magnifies an image presented on a television monitor and, by causing light rays leaving the lens to be in essentially parallel paths, reduces eye strain of the simulator operator.

  5. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
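
    The size-adaptive, distributable block volumes can be pictured with the generic blocking-and-pool pattern below. This is an illustrative Python sketch, not the platform's implementation; the volume, block edge, and per-block operation are all placeholder assumptions.

        # Hedged sketch: split a 3D volume into blocks and process them in
        # parallel, the basic pattern behind block-volume parallelization.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor
        from itertools import product

        def blocks(shape, edge):
            """Yield slice-tuples tiling a 3D shape with cubes of side 'edge'."""
            for z, y, x in product(*(range(0, s, edge) for s in shape)):
                yield (slice(z, z+edge), slice(y, y+edge), slice(x, x+edge))

        def process(args):
            sl, data = args
            return sl, data.mean()          # stand-in for a per-block filter step

        if __name__ == "__main__":
            vol = np.random.rand(128, 128, 128).astype(np.float32)  # toy volume
            tasks = [(sl, vol[sl]) for sl in blocks(vol.shape, edge=32)]
            with ProcessPoolExecutor() as pool:     # load distribution/balancing
                for sl, value in pool.map(process, tasks):
                    pass                    # merge per-block results here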

  6. Comparison of the lowest-order transverse-electric (TE1) and transverse-magnetic (TEM) modes of the parallel-plate waveguide for terahertz pulse applications.

    PubMed

    Mendis, Rajind; Mittleman, Daniel M

    2009-08-17

    We present a comprehensive experimental study comparing the propagation characteristics of the virtually unknown TE1 mode to the well-known TEM mode of the parallel-plate waveguide (PPWG) for THz pulse applications. We demonstrate that it is possible to overcome the undesirable effects caused by the TE1 mode's inherent low-frequency cutoff, making it a viable THz wave-guiding option, and that for certain applications the TE1 mode may even be more desirable than the TEM mode. This study adds a whole new dimension to the THz technological capabilities offered by the PPWG via the possible use of the TE1 mode. (c) 2009 Optical Society of America
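
    For context, standard waveguide theory (not derived in the abstract itself) gives the TE1 cutoff of a PPWG with plate separation b as

      \[ f_c = \frac{c}{2b}, \qquad \text{e.g. } b = 100\ \mu\text{m} \;\Rightarrow\; f_c = \frac{3 \times 10^{8}\ \text{m/s}}{2 \times 10^{-4}\ \text{m}} = 1.5\ \text{THz}, \]

    which is why the cutoff sits squarely in the THz band for typical plate separations.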

  7. Negative tunnel magnetoresistance and differential conductance in transport through double quantum dots

    NASA Astrophysics Data System (ADS)

    Trocha, Piotr; Weymann, Ireneusz; Barnaś, Józef

    2009-10-01

    Spin-dependent transport through two coupled single-level quantum dots weakly connected to ferromagnetic leads with collinear magnetizations is considered theoretically. Transport characteristics, including the current, linear and nonlinear conductances, and tunnel magnetoresistance are calculated using the real-time diagrammatic technique in the parallel, serial, and intermediate geometries. The effects due to virtual tunneling processes between the two dots via the leads, associated with off-diagonal coupling matrix elements, are also considered. Negative differential conductance and negative tunnel magnetoresistance have been found in the case of serial and intermediate geometries, while no such behavior has been observed for double quantum dots coupled in parallel. It is also shown that transport characteristics strongly depend on the magnitude of the off-diagonal coupling matrix elements.

  8. Gravisensitivity of various host plant -virus systems in simulated microgravity

    NASA Astrophysics Data System (ADS)

    Mishchenko, Lidiya; Taran, Oksana; Gordejchyk, Olga

    In spite of considerable achievements in the study of gravity effects on plant development, some issues of gravitropism, like species specificity of the gravitation response, remain unclear. The solution of such problems is connected with the aspects of life support in piloted space expeditions. The role of microgravity remains practically unstudied in the development of relations in the host plant-virus system, which are important for biotechnologies in crop production. It is evident that the conditions of space flight can act as stressors, and the stress induced by them favors the reactivation of latent herpes viruses in humans (Satish et al., 2009). Viral infections of plants, which can also be in a latent state at certain stages of plant development, cause great damage to the growth and development of the host plant. Space flight conditions may cause both reactivation of latent viral infection in plants and its elimination, as we have found for the system WSMW-wheat (Mishchenko et al., 2004). Our further research activities were concentrated on the identification of gravisensitivity in the virus-potato plant system, to find out whether there was any species-related specificity of the reaction. In our research we used potato plants of the Krymska Rosa, Zhuravushka, Agave, Belarosa, Kupalinka, and Zdubytok varieties. Simulated microgravity was ensured by the clinostats KG-8 and Cycle-2. Gravisensitivity has been studied in systems including PVX, PVM, and PVY. Virus concentrations were determined by ELISA using LOEWE reagents (Germany). Virus identification by morphological features was done by electron microscopy. For the system PVX-potato plant, we found a reduction in virus antigen content with prolonged clinostating. On the 18th day of cultivation, the plants showed a high level of X-virus antigen content in both the stationary (control) and clinostated variants. On the 36th and 47th days, depending on plant variety, clinostated plants had lower X-virus antigen content compared with the negative control. In plants cultivated without clinostating, PVX antigen content was 5-10 times greater than in the negative control variants. Prolonged (over 43 days) clinostating, depending on potato plant genotype, may cause both stimulation and impeding of the accumulation of Y-virus antigens in potato plants. Studying the interaction between the host plant and PVM, we found that prolonged clinorotation at first reduced the antigen content by 25-30% compared with the stationary control. Further on, after 44 days of experimentation, the antigen content increased, with a more intensive increase in non-clinostated plants. Thus, prolonged clinostating reduced the intensity of antigen accumulation but did not stop it completely. We suggest this indicates a low sensitivity of the PVM-potato plant system to simulated microgravity. The phenomenon of PVX reproduction in simulated microgravity may find employment in present-day biotechnologies.

  9. Milestone Completion Report WBS 1.3.5.05 ECP/VTK-m FY17Q2 [MS-17/01] Better Dynamic Types Design SDA05-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.

    The FY17Q2 milestone of the ECP/VTK-m project, which is the first milestone, includes the completion of design documents for the introduction of virtual methods into the VTK-m framework: specifically, the ability, from within the code of a device (e.g., a GPU or Xeon Phi), to jump to a virtual method specified at run time. This change will enable us to drastically reduce the compile time and the executable code size of the VTK-m library. Our first design introduced the idea of adding virtual functions to classes that are used during algorithm execution. (Virtual methods were previously banned from the so-called execution environment.) The design was straightforward. VTK-m already has the generic concepts of an "array handle" that provides a uniform interface to memory of different structures and an "array portal" that provides generic access to said memory. These array handles and portals use C++ templating to adapt them to different memory structures. This composition provides a powerful ability to adapt to data sources, but requires knowing static types. The proposed design creates a template specialization of an array portal that decorates another array handle while hiding its type. In this way we can wrap any type of static array handle and then feed it to a single compiled instance of a function. The second design focused on the mechanics of implementing virtual methods on parallel devices, with a focus on CUDA. Our initial experiments on CUDA showed a very large overhead for using virtual C++ classes with virtual methods, the standard approach. Instead, we are using an alternate method, provided by C, that uses function pointers. With the completion of this milestone, we are able to move to the implementation of objects with virtual-like methods. The upshot will be much faster compile times and much smaller library/executable sizes.
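
    The type-erasure idea behind the first design can be caricatured in a short sketch (Python is used here purely for brevity; VTK-m itself is C++, and the device-side dispatch uses C-style function pointers as described above). All class and function names below are hypothetical.

      # Toy sketch of the type-erasure idea behind the design (names hypothetical).
      # A "virtual portal" hides the concrete storage behind a uniform get/set
      # interface, so one routine can serve any array handle without knowing its type.
      import array

      class VirtualPortal:
          """Decorates any indexable storage, hiding its concrete type."""
          def __init__(self, storage):
              # Function-pointer-style dispatch: bind concrete accessors once,
              # loosely mirroring the C function-pointer approach used on CUDA.
              self._get = storage.__getitem__
              self._set = storage.__setitem__
              self._len = storage.__len__

          def get(self, i): return self._get(i)
          def set(self, i, v): self._set(i, v)
          def __len__(self): return self._len()

      def scale_in_place(portal, factor):
          """One 'compiled' worker that serves every storage type via the portal."""
          for i in range(len(portal)):
              portal.set(i, portal.get(i) * factor)

      for storage in ([1.0, 2.0, 3.0], array.array("d", [4.0, 5.0])):
          scale_in_place(VirtualPortal(storage), 10.0)
          print(list(storage))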

  10. gWEGA: GPU-accelerated WEGA for molecular superposition and shape comparison.

    PubMed

    Yan, Xin; Li, Jiabo; Gu, Qiong; Xu, Jun

    2014-06-05

    Virtual screening of a large chemical library for drug lead identification requires searching/superimposing a large number of three-dimensional (3D) chemical structures. This article reports a graphics processing unit (GPU)-accelerated weighted Gaussian algorithm (gWEGA) that expedites shape or shape-feature similarity score-based virtual screening. With 86 GPU nodes (each node has one GPU card), gWEGA can screen 110 million conformations derived from the entire ZINC drug-like database with diverse antidiabetic agents as query structures within 2 s (i.e., screening more than 55 million conformations per second). The rapid screening speed was accomplished through massive parallelization on multiple GPU nodes and rapid prescreening of 3D structures (based on their shape descriptors and pharmacophore feature compositions). Copyright © 2014 Wiley Periodicals, Inc.
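
    The Gaussian-overlap machinery underlying WEGA-style shape comparison can be sketched with the standard closed-form integral of a product of two spherical Gaussians; this is a generic illustration, not gWEGA's exact kernel or weights, and the width parameter alpha below is hypothetical.

      # Generic sketch of Gaussian shape-overlap scoring (not gWEGA's exact kernel).
      # Uses the closed-form integral of a product of two spherical Gaussians:
      #   integral exp(-a|r-A|^2) exp(-b|r-B|^2) d^3r
      #     = (pi/(a+b))^(3/2) * exp(-a*b/(a+b) * |A-B|^2)
      import numpy as np

      def pair_overlap(a, A, b, B):
          d2 = np.sum((A - B) ** 2)
          return (np.pi / (a + b)) ** 1.5 * np.exp(-a * b / (a + b) * d2)

      def shape_overlap(coords1, coords2, alpha=0.81):  # alpha: hypothetical width
          return sum(pair_overlap(alpha, A, alpha, B)
                     for A in coords1 for B in coords2)

      mol1 = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
      mol2 = np.array([[0.1, 0.0, 0.0], [1.4, 0.1, 0.0]])
      o11, o22, o12 = (shape_overlap(mol1, mol1), shape_overlap(mol2, mol2),
                       shape_overlap(mol1, mol2))
      print(o12 / (o11 + o22 - o12))  # Tanimoto-style shape similarity in (0, 1]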

  11. Making extreme computations possible with virtual machines

    NASA Astrophysics Data System (ADS)

    Reuter, J.; Chokoufe Nejad, B.; Ohl, T.

    2016-10-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by an order of magnitude. The byte-code is interpreted by a virtual machine with runtimes comparable to compiled code and better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The byte-code matrix elements are available as an alternative input for the event generator WHIZARD. The byte-code interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
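
    The principle of trading compiled code for interpreted byte-code can be shown in miniature with a toy stack machine; O'Mega's actual instruction set and the WHIZARD interface are far richer, and the opcodes below are invented for illustration.

      # Toy stack-machine interpreter illustrating byte-code evaluation of an
      # arithmetic expression (O'Mega's real instruction set is far richer).
      PUSH, ADD, MUL = 0, 1, 2

      def run(bytecode, constants):
          stack = []
          for op, arg in bytecode:
              if op == PUSH:
                  stack.append(constants[arg])
              elif op == ADD:
                  b, a = stack.pop(), stack.pop()
                  stack.append(a + b)
              elif op == MUL:
                  b, a = stack.pop(), stack.pop()
                  stack.append(a * b)
          return stack.pop()

      # (2 + 3) * 4 encoded as five compact instructions instead of compiled code.
      program = [(PUSH, 0), (PUSH, 1), (ADD, 0), (PUSH, 2), (MUL, 0)]
      print(run(program, constants=[2.0, 3.0, 4.0]))  # 20.0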

  12. Free Energy-Based Virtual Screening and Optimization of RNase H Inhibitors of HIV-1 Reverse Transcriptase.

    PubMed

    Zhang, Baofeng; D'Erasmo, Michael P; Murelli, Ryan P; Gallicchio, Emilio

    2016-09-30

    We report the results of a binding free energy-based virtual screening campaign of a library of 77 α-hydroxytropolone derivatives against the challenging RNase H active site of the reverse transcriptase (RT) enzyme of human immunodeficiency virus-1. Multiple protonation states, rotamer states, and binding modalities of each compound were individually evaluated. The work involved more than 300 individual absolute alchemical binding free energy parallel molecular dynamics calculations and over 1 million CPU hours on national computing clusters and a local campus computational grid. The thermodynamic and structural measures obtained in this work rationalize a series of characteristics of this system useful for guiding future synthetic and biochemical efforts. The free energy model identified key ligand-dependent entropic and conformational reorganization processes difficult to capture using standard docking and scoring approaches. Binding free energy-based optimization of the lead compounds emerging from the virtual screen has yielded four compounds with very favorable binding properties, which will be the subject of further experimental investigations. This work is one of the few reported applications of advanced binding free energy models to large-scale virtual screening and optimization projects. It further demonstrates that, with suitable algorithms and automation, advanced binding free energy models can have a useful role in early-stage drug-discovery programs.

  13. Ground Motion Prediction for M7+ scenarios on the San Andreas Fault using the Virtual Earthquake Approach

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.

    2013-05-01

    There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach that validates ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism, and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point-source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
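
    The core of the ambient-field step, recovering a travel time by cross-correlating diffuse noise recorded at two stations, can be sketched on synthetic data as follows; the real processing additionally uses coda waves, causality/symmetry constraints, and the 1D source-depth correction described above.

      # Sketch: recovering an inter-station travel time by cross-correlating
      # diffuse noise at two receivers (synthetic; real processing is richer).
      import numpy as np

      rng = np.random.default_rng(0)
      n, lag = 5000, 50                   # noise samples; true delay in samples
      src = rng.standard_normal(n + lag)  # diffuse noise field
      sta_a = src[lag:]                   # station A
      sta_b = 0.6 * src[:n]               # station B: A's signal, delayed, weaker

      xcorr = np.correlate(sta_b, sta_a, mode="full")
      lags = np.arange(-(n - 1), n)
      print("recovered travel time:", lags[np.argmax(xcorr)], "samples")  # ~50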

  14. Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    The next generation of scalable network simulators employ virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations could be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on actual prototype implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message-passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically demonstrate the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
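
    The contrast with free-running execution can be sketched with a minimal virtual-time loop in which the least-advanced VM always runs next; this is a generic PDES-style illustration, not the authors' scheduler, and the per-VM time steps are invented.

      # Minimal sketch of virtual-time scheduling: each VM is advanced strictly
      # in virtual-time order, unlike fairness-based schedulers that let VMs
      # free-run. Generic illustration; per-VM steps are hypothetical.
      import heapq

      class VM:
          def __init__(self, name, step):
              self.name, self.step, self.vtime = name, step, 0.0
          def advance(self):
              """Run one quantum; return the VM's new virtual time."""
              self.vtime += self.step
              return self.vtime

      vms = [VM("fast", 0.5), VM("slow", 2.0), VM("medium", 1.0)]
      queue = [(vm.vtime, i) for i, vm in enumerate(vms)]
      heapq.heapify(queue)

      for _ in range(8):                       # schedule 8 quanta
          vtime, i = heapq.heappop(queue)      # least-advanced VM runs next
          print(f"t={vtime:4.1f} running {vms[i].name}")
          heapq.heappush(queue, (vms[i].advance(), i))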

  15. Development and Application of ANN Model for Worker Assignment into Virtual Cells of Large Sized Configurations

    NASA Astrophysics Data System (ADS)

    Murali, R. V.; Puri, A. B.; Fathi, Khalid

    2010-10-01

    This paper presents an extended version of a study already undertaken on the development of an artificial neural network (ANN) model for assigning workforce to virtual cells under virtual cellular manufacturing systems (VCMS) environments. Previously, the same authors introduced this concept and applied it to virtual cells of two-cell configurations, and the results demonstrated that ANNs could be a worthwhile tool for carrying out workforce assignments. In this attempt, three-cell configuration problems are considered for the worker assignment task. Virtual cells are formed under a dual resource constraint (DRC) context, in which the number of available workers is less than the total number of machines available. Since worker assignment tasks are quite non-linear and highly dynamic in nature under varying inputs and conditions, and since ANNs have the ability to model complex relationships between inputs and outputs and find similar patterns effectively, an attempt was earlier made to employ ANNs for the above task. In this paper, the multilayered perceptron with feed forward (MLP-FF) neural network model is reused for worker assignment tasks of three-cell configurations under the DRC context, and its performance at different time periods is analyzed. The previously proposed worker assignment model has been reconfigured, and cell formation solutions available for three-cell configurations in the literature are used in combination to generate datasets for training the ANN framework. Finally, results of the study are presented and discussed.
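
    The MLP-FF mapping at the heart of the model can be written compactly as below; this is a generic sketch in which the layer sizes, features, and random weights are hypothetical stand-ins for the trained network and the cell-formation datasets described above.

      # Generic multilayer-perceptron (feed-forward) sketch mapping worker/cell
      # features to assignment scores. Dimensions and features are hypothetical.
      import numpy as np

      rng = np.random.default_rng(1)

      def mlp_forward(x, weights, biases):
          """Feed-forward pass: sigmoid hidden layers, linear output layer."""
          a = x
          for W, b in zip(weights[:-1], biases[:-1]):
              a = 1.0 / (1.0 + np.exp(-(a @ W + b)))  # sigmoid activation
          return a @ weights[-1] + biases[-1]          # score per virtual cell

      n_features, n_hidden, n_cells = 6, 10, 3        # hypothetical dimensions
      weights = [rng.normal(size=(n_features, n_hidden)),
                 rng.normal(size=(n_hidden, n_cells))]
      biases = [np.zeros(n_hidden), np.zeros(n_cells)]

      worker = rng.normal(size=n_features)            # one worker's features
      scores = mlp_forward(worker, weights, biases)
      print("assign to virtual cell", int(np.argmax(scores)))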

  16. An equivalent viscoelastic model for rock mass with parallel joints

    NASA Astrophysics Data System (ADS)

    Li, Jianchun; Ma, Guowei; Zhao, Jian

    2010-03-01

    An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.

  17. GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.

    PubMed

    Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik

    2008-03-01

    Molecular simulation is an extremely useful, but computationally very expensive, tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now achieves extremely high performance on single processors, from algorithmic optimizations and hand-coded routines, and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle-mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also provides high simulation performance on quite modest numbers of standard cluster nodes.

  18. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
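
    Damped Newton iteration with a selectable damping parameter, the workhorse of the trim solution described above, has the generic form below; this is a textbook sketch on a scalar toy problem, not the authors' trim equations.

      # Textbook damped Newton iteration x <- x - lambda * f(x)/f'(x); the
      # damping parameter lambda in (0, 1] tames divergence for poor starting
      # guesses. Toy scalar problem, not the helicopter trim equations.
      def damped_newton(f, fprime, x0, lam=0.5, tol=1e-12, max_iter=100):
          x = x0
          for _ in range(max_iter):
              step = f(x) / fprime(x)
              x -= lam * step
              if abs(step) < tol:
                  break
          return x

      # Solve x^3 - 2 = 0 from a deliberately bad starting point.
      root = damped_newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=10.0)
      print(root)  # ~1.2599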

  19. Use of Parallel Micro-Platform for the Simulation the Space Exploration

    NASA Astrophysics Data System (ADS)

    Velasco Herrera, Victor Manuel; Velasco Herrera, Graciela; Rosano, Felipe Lara; Rodriguez Lozano, Salvador; Lucero Roldan Serrato, Karen

    The purpose of this work is to create a parallel micro-platform that simulates the virtual movements of a space exploration in 3D. One of the innovations presented in this design is the application of a lever mechanism for the transmission of movement. The development of such a robot is a challenging task, very different from industrial manipulators due to a totally different target system of requirements. This work presents the computer-aided study and simulation of the movement of this parallel manipulator. The model has been developed using the computer-aided design platform Unigraphics, in which the geometric modeling of each of the components and the final assembly was done (CAD), the files for the computer-aided manufacture (CAM) of each of the pieces were generated, and the kinematic simulation of the system was performed, evaluating different driving schemes. We used the MATLAB aerospace toolbox and created an adaptive control module to simulate the system.

  20. An efficient parallel-processing method for transposing large matrices in place.

    PubMed

    Portnoff, M R

    1999-01-01

    We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited for a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order of magnitude increase in speed over the previously published algorithm by Cate and Twigg for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented as well as an extension for matrices large enough to require virtual memory.
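
    For the square-matrix case mentioned at the end, the in-place transpose reduces to pairwise swaps across the diagonal. The sketch below shows only this simple case; the published algorithm adds the blocking and shared indexing calculations that give it its cache efficiency and parallelism.

      # Minimal in-place transpose of a square matrix stored row-major in a
      # flat list: swap element (i, j) with (j, i) above the diagonal.
      def transpose_square_inplace(a, n):
          for i in range(n):
              for j in range(i + 1, n):
                  a[i * n + j], a[j * n + i] = a[j * n + i], a[i * n + j]

      a = list(range(9))          # 3x3 matrix 0..8, row-major
      transpose_square_inplace(a, 3)
      print(a)                    # [0, 3, 6, 1, 4, 7, 2, 5, 8]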

  1. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 2, Issue 1

    DTIC Science & Technology

    2010-01-01

    Report excerpt: Researchers in AHPCRC Technical Area 4 focus on improving processes for developing scalable, accurate parallel programs that are easily ported from one... Virtual levels in Sequoia represent an abstract memory hierarchy without specifying data transfer mechanisms, giving the...

  2. Multifractal Internet Traffic Model and Active Queue Management

    DTIC Science & Technology

    2003-01-01

    Report excerpt: ...dropped by the Adaptive RED, ssthresh decreases from 64 KB to 4 KB and the new congestion window cwnd is decreased from 8 KB to 1 KB (Tahoe)... a method to predict the queuing behavior of FIFO and RED queues, in order to satisfy given delay and jitter requirements for real-time connections... 5.2 Vulnerability of Adaptive RED to Web-mice; 5.3 A Parallel Virtual Queues Structure.

  3. Separating the Laparoscopic Camera Cord From the Monopolar "Bovie" Cord Reduces Unintended Thermal Injury From Antenna Coupling: A Randomized Controlled Trial.

    PubMed

    Robinson, Thomas N; Jones, Edward L; Dunn, Christina L; Dunne, Bruce; Johnson, Elizabeth; Townsend, Nicole T; Paniccia, Alessandro; Stiegmann, Greg V

    2015-06-01

    The monopolar "Bovie" is used in virtually every laparoscopic operation. The active electrode and its cord emit radiofrequency energy that couples (or transfers) to nearby conductive material without direct contact. This phenomenon is increased when the active electrode cord is oriented parallel to another wire/cord. The parallel orientation of the "Bovie" and laparoscopic camera cords causes transfer of energy to the camera cord, resulting in cutaneous burns at the camera trocar incision. We hypothesized that separating the active electrode/camera cords would reduce thermal injury occurring at the camera trocar incision in comparison with parallel-oriented active electrode/camera cords. In this prospective, blinded, randomized controlled trial, patients undergoing standardized laparoscopic cholecystectomy were randomized to separated active electrode/camera cords or parallel-oriented active electrode/camera cords. The primary outcome variable was thermal injury determined by histology from skin biopsied at the camera trocar incision. Eighty-four patients participated. Baseline demographics were similar in the groups for age, sex, preoperative diagnosis, operative time, and blood loss. Thermal injury at the camera trocar incision was lower in the separated versus parallel group (31% vs 57%; P = 0.027). Separation of the laparoscopic camera cord from the active electrode cord decreases thermal injury from antenna coupling at the camera trocar incision in comparison with the parallel orientation of these cords. Therefore, parallel orientation of these cords (an arrangement promoted by integrated operating rooms) should be abandoned. The findings of this study should influence the operating room setup for all laparoscopic cases.

  4. Nonlinear Fluid Computations in a Distributed Environment

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.; Smith, Merritt H.

    1995-01-01

    The performance of a loosely and a tightly coupled workstation cluster is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with the solution of one grid zone per worker process, coupled using the PVM message-passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.
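
    The task-allocation rule, one grid zone per worker matched to processor speed subject to memory limits, can be sketched as a greedy assignment; the zone sizes, worker speeds, and memory figures below are hypothetical.

      # Greedy sketch of manager-worker task allocation: assign the largest grid
      # zones to the fastest workers, skipping workers whose memory is too small.
      # Zone sizes, speeds, and memory figures are hypothetical.
      zones = [("wing", 900_000), ("fuselage", 600_000), ("nacelle", 250_000)]
      workers = [("ws1", 2.0, 1_000_000), ("ws2", 1.0, 800_000),
                 ("ws3", 1.5, 300_000)]  # (name, relative speed, memory in points)

      assignment = {}
      for zone, size in sorted(zones, key=lambda z: -z[1]):
          # fastest worker that still fits the zone and is not yet busy
          candidates = [w for w in workers
                        if w[2] >= size and w[0] not in assignment.values()]
          assignment[zone] = max(candidates, key=lambda w: w[1])[0]
      print(assignment)  # {'wing': 'ws1', 'fuselage': 'ws2', 'nacelle': 'ws3'}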

  5. Establishment of key grid-connected performance index system for integrated PV-ES system

    NASA Astrophysics Data System (ADS)

    Li, Q.; Yuan, X. D.; Qi, Q.; Liu, H. M.

    2016-08-01

    In order to further promote the integrated, optimized operation of distributed new energy, energy storage, and active loads, this paper studies an integrated photovoltaic-energy storage (PV-ES) system connected to the distribution network and analyzes typical structures and configuration selection for the integrated PV-ES generation system. By combining practical grid-connected characteristics requirements with the technology standards for photovoltaic generation systems, and taking full account of the energy storage system, this paper proposes several new grid-connected performance indexes, such as paralleled current sharing characteristic, parallel response consistency, adjusting characteristic, virtual moment of inertia characteristic, and on-grid/off-grid switch characteristic. A comprehensive and feasible grid-connected performance index system is then established to support grid-connected performance testing of the integrated PV-ES system.

  6. Collective communications apparatus and method for parallel systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knies, Allan D.; Keppel, David Pardo; Woo, Dong Hyuk

    A collective communication apparatus and method for parallel computing systems. For example, one embodiment of an apparatus comprises a plurality of processor elements (PEs); collective interconnect logic to dynamically form a virtual collective interconnect (VCI) between the PEs at runtime without global communication among all of the PEs, the VCI defining a logical topology between the PEs in which each PE is directly communicatively coupled to only a subset of the remaining PEs; and execution logic to execute collective operations across the PEs, wherein one or more of the PEs receive first results from a first portion of the subset of the remaining PEs, perform a portion of the collective operations, and provide second results to a second portion of the subset of the remaining PEs.
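
    The data flow, in which each PE combines partial results from a small subset of PEs and forwards one result onward, is essentially a reduction tree. The serial sketch below traces that flow; the dynamic runtime formation of the topology is not modeled.

      # Serial sketch of a binary reduction tree: each node combines results
      # from only a small subset (its two children) and passes one result
      # upward, mirroring the virtual collective interconnect's data flow.
      def tree_reduce(values, combine):
          level = list(values)
          while len(level) > 1:
              nxt = []
              for i in range(0, len(level) - 1, 2):
                  nxt.append(combine(level[i], level[i + 1]))  # one PE's work
              if len(level) % 2:
                  nxt.append(level[-1])    # odd element forwarded unchanged
              level = nxt
          return level[0]

      print(tree_reduce(range(8), lambda a, b: a + b))  # 28, in log2(8) levels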

  7. A LabVIEW-Based Virtual Instrument System for Laser-Induced Fluorescence Spectroscopy.

    PubMed

    Wu, Qijun; Wang, Lufei; Zu, Lily

    2011-01-01

    We report the design and operation of a Virtual Instrument (VI) system based on LabVIEW 2009 for laser-induced fluorescence experiments. This system achieves synchronous control of equipment and acquisition of real-time fluorescence data communicating with a single computer via GPIB, USB, RS232, and parallel ports. The reported VI system can also accomplish data display, saving, and analysis, and printing the results. The VI system performs sequences of operations automatically, and this system has been successfully applied to obtain the excitation and dispersion spectra of α-methylnaphthalene. The reported VI system opens up new possibilities for researchers and increases the efficiency and precision of experiments. The design and operation of the VI system are described in detail in this paper, and the advantages that this system can provide are highlighted.

  8. Comparison of Virtual Oscillator and Droop Control: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Rodriguez, Miguel; Dhople, Sairaj

    Virtual oscillator control (VOC) and droop control (DC) are two techniques that can be used to ensure synchronization and power sharing of parallel inverters in islanded operation. VOC relies on the implementation of non-linear Van der Pol oscillator equations in the control system of the inverter, acting upon the time-domain instantaneous inverter current and terminal voltage. On the other hand, DC explicitly computes the active and reactive power produced by the inverter and relies on limited-bandwidth low-pass filters. Even though both methods can be engineered to produce the same steady-state characteristics, their dynamic performances are significantly different. This paper presents analytical and experimental results that aim to compare both methods. It is shown that VOC is inherently faster and enables minimizing the circulating currents. The results are verified using three 120 V, 1 kW inverters.

  9. A LabVIEW-Based Virtual Instrument System for Laser-Induced Fluorescence Spectroscopy

    PubMed Central

    Wu, Qijun; Wang, Lufei; Zu, Lily

    2011-01-01

    We report the design and operation of a Virtual Instrument (VI) system based on LabVIEW 2009 for laser-induced fluorescence experiments. This system achieves synchronous control of equipment and acquisition of real-time fluorescence data communicating with a single computer via GPIB, USB, RS232, and parallel ports. The reported VI system can also accomplish data display, saving, and analysis, and printing the results. The VI system performs sequences of operations automatically, and this system has been successfully applied to obtain the excitation and dispersion spectra of α-methylnaphthalene. The reported VI system opens up new possibilities for researchers and increases the efficiency and precision of experiments. The design and operation of the VI system are described in detail in this paper, and the advantages that this system can provide are highlighted. PMID:22013388

  10. Feasibility of incorporating functionally relevant virtual rehabilitation in sub-acute stroke care: perception of patients and clinicians.

    PubMed

    Demers, Marika; Chan Chun Kong, Daniel; Levin, Mindy F

    2018-03-11

    To determine user satisfaction and safety of incorporating a low-cost virtual rehabilitation intervention as an adjunctive therapeutic option for cognitive-motor upper limb rehabilitation in individuals with sub-acute stroke. A low-cost upper limb virtual rehabilitation application incorporating realistic, functionally relevant unimanual and bimanual tasks, specifically designed for cognitive-motor rehabilitation, was developed for patients with sub-acute stroke. Clinicians and individuals with stroke interacted with the intervention for 15-20 or 20-45 minutes, respectively. The study had a mixed-methods convergent parallel design that included a focus group interview with clinicians working in a stroke program and semi-structured interviews and standardized assessments (Borg Perceived Exertion Scale, Short Feedback Questionnaire) for participants with sub-acute stroke undergoing rehabilitation. The occurrence of adverse events was also noted. Three main themes emerged from the clinician focus group and patient interviews: perceived usefulness in rehabilitation, satisfaction with the virtual reality intervention, and aspects to improve. All clinicians and the majority of participants with stroke were highly satisfied with the intervention and perceived its usefulness in decreasing arm motor impairment during functional tasks. No participants experienced major adverse events. Incorporation of this type of functional activity game-based virtual reality intervention in the sub-acute phase of rehabilitation represents a way to transfer skills learned early in the clinical setting to real-world situations. This type of intervention may lead to better integration of the upper limb into everyday activities. Implications for Rehabilitation: Use of a cognitive-motor low-cost virtual reality intervention designed to remediate arm motor impairments in sub-acute stroke is feasible, safe, and perceived as useful by therapists and patients for stroke rehabilitation. Input from end-users (therapists and individuals with stroke) is critical for the development and implementation of a virtual reality intervention.

  11. Data General Corporation Advanced Operating System/Virtual Storage (AOS/ VS). Revision 7.60

    DTIC Science & Technology

    1989-02-22

    Report excerpt: ...an access control list for each directory and data file. An access control list includes the users who can and cannot access files as well as the access... and any required data, it can operate asynchronously and in parallel... The IOC can perform the data transfer without further intervention from the CPU. The I/O channels interface with the processor or system...

  12. Environment Study of AGNs at z = 0.3 to 3.0 Using the Japanese Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Shirasaki, Y.; Ohishi, M.; Mizumoto, Y.; Takata, T.; Tanaka, M.; Yasuda, N.

    2010-12-01

    We present a science use case of the Virtual Observatory, carried out to examine the environment of AGNs up to a redshift of 3.0. We used the Japanese Virtual Observatory (JVO) to obtain Subaru Suprime-Cam images around known AGNs. According to the hierarchical galaxy formation model, AGNs are expected to be found in environments of higher galaxy density than those of typical galaxies. The current observations, however, indicate that AGNs do not reside in particularly high density environments. We investigated ~1000 AGNs, a sample about ten times larger than those of other studies covering redshifts larger than 0.6. We successfully found significant excess of galaxies around AGNs at redshifts of 0.3 to 1.8. If this work had been done in a classical manner, that is, if raw data were retrieved from the archive through a form-based web interface in an interactive way and reduced on a low-performance computer, it might have taken several years to finish. Since the Virtual Observatory system is accessible through a standard interface, it is easy to query and retrieve data in an automatic way. We constructed a pipeline for retrieving the data and calculating the galaxy number density around a given coordinate. This procedure was executed in parallel on ~10 quad-core PCs, and it took only one day to obtain the final result. Our result implies that the Virtual Observatory can be a powerful tool for conducting astronomical research based on large amounts of data.

  13. Declarative virtual water maze learning and emotional fear conditioning in primary insomnia.

    PubMed

    Kuhn, Marion; Hertenstein, Elisabeth; Feige, Bernd; Landmann, Nina; Spiegelhalder, Kai; Baglioni, Chiara; Hemmerling, Johanna; Durand, Diana; Frase, Lukas; Klöppel, Stefan; Riemann, Dieter; Nissen, Christoph

    2018-05-02

    Healthy sleep restores the brain's ability to adapt to novel input through memory formation based on activity-dependent refinements of the strength of neural transmission across synapses (synaptic plasticity). In line with this framework, patients with primary insomnia often report subjective memory impairment. However, investigations of memory performance did not produce conclusive results. The aim of this study was to further investigate memory performance in patients with primary insomnia in comparison to healthy controls, using two well-characterized learning tasks, a declarative virtual water maze task and emotional fear conditioning. Twenty patients with primary insomnia according to DSM-IV criteria (17 females, three males, 43.5 ± 13.0 years) and 20 good sleeper controls (17 females, three males, 41.7 ± 12.8 years) were investigated in a parallel-group study. All participants completed a hippocampus-dependent virtual Morris water maze task and amygdala-dependent classical fear conditioning. Patients with insomnia showed significantly delayed memory acquisition in the virtual water maze task, but no significant difference in fear acquisition compared with controls. These findings are consistent with the notion that memory processes that emerge from synaptic refinements in a hippocampal-neocortical network are particularly sensitive to chronic disruptions of sleep, while those in a basic emotional amygdala-dependent network may be more resilient. © 2018 European Sleep Research Society.

  14. An evaluation of organic light emitting diode monitors for medical applications: Great timing, but luminance artifacts

    PubMed Central

    Elze, Tobias; Taylor, Christopher; Bex, Peter J.

    2013-01-01

    Purpose: In contrast to the dominant medical liquid crystal display (LCD) technology, organic light-emitting diode (OLED) monitors control the display luminance via separate light-emitting diodes for each pixel and are therefore supposed to overcome many previously documented temporal artifacts of medical LCDs. We assessed the temporal and luminance characteristics of the only currently available OLED monitor designed for use in the medical treatment field (SONY PVM2551MD) and checked the authors' main findings with another SONY OLED device (PVM2541). Methods: Temporal properties of the photometric output were measured with an optical transient recorder. Luminances of the three color primaries and white for all 256 digital driving levels (DDLs) were measured with a spectroradiometer. Between the luminances of neighboring DDLs, just noticeable differences were calculated according to a perceptual model developed for medical displays. Luminances of full screen (FS) stimuli were compared to luminances of smaller stimuli with identical DDLs. Results: All measured luminance transition times were below 300 μs. Luminances were independent of the luminance in the preceding frame. However, for the single color primaries, up to 50.5% of the luminances of neighboring DDLs were not perceptually distinguishable. If two color primaries were active simultaneously, between 36.7% and 55.1% of neighboring luminances for increasing DDLs of the third primary were even decreasing. Moreover, luminance saturation effects were observed when too many pixels were active simultaneously. This effect was strongest for white; a small white patch was close to 400 cd/m², but in FS the luminance of white saturated at 162 cd/m². Due to different saturation levels, the luminance of FS green and FS yellow could exceed the luminance of FS white for identical DDLs. Conclusions: The OLED temporal characteristics are excellent and superior to those of LCDs. However, the OLEDs revealed severe perceptually relevant artifacts with implications for applicability to medical imaging. PMID:24007183

  15. An evaluation of organic light emitting diode monitors for medical applications: great timing, but luminance artifacts.

    PubMed

    Elze, Tobias; Taylor, Christopher; Bex, Peter J

    2013-09-01

    In contrast to the dominant medical liquid crystal display (LCD) technology, organic light-emitting diode (OLED) monitors control the display luminance via separate light-emitting diodes for each pixel and are therefore supposed to overcome many previously documented temporal artifacts of medical LCDs. We assessed the temporal and luminance characteristics of the only currently available OLED monitor designed for use in the medical treatment field (SONY PVM2551MD) and checked the authors' main findings with another SONY OLED device (PVM2541). Temporal properties of the photometric output were measured with an optical transient recorder. Luminances of the three color primaries and white for all 256 digital driving levels (DDLs) were measured with a spectroradiometer. Between the luminances of neighboring DDLs, just noticeable differences were calculated according to a perceptual model developed for medical displays. Luminances of full screen (FS) stimuli were compared to luminances of smaller stimuli with identical DDLs. All measured luminance transition times were below 300 μs. Luminances were independent of the luminance in the preceding frame. However, for the single color primaries, up to 50.5% of the luminances of neighboring DDLs were not perceptually distinguishable. If two color primaries were active simultaneously, between 36.7% and 55.1% of neighboring luminances for increasing DDLs of the third primary were even decreasing. Moreover, luminance saturation effects were observed when too many pixels were active simultaneously. This effect was strongest for white; a small white patch was close to 400 cd/m², but in FS the luminance of white saturated at 162 cd/m². Due to different saturation levels, the luminance of FS green and FS yellow could exceed the luminance of FS white for identical DDLs. The OLED temporal characteristics are excellent and superior to those of LCDs. However, the OLEDs revealed severe perceptually relevant artifacts with implications for applicability to medical imaging.

  16. Vestibular Migraine in Children and Adolescents: Clinical Findings and Laboratory Tests

    PubMed Central

    Langhagen, Thyra; Lehrer, Nicole; Borggraefe, Ingo; Heinen, Florian; Jahn, Klaus

    2015-01-01

    Introduction: Vestibular migraine (VM) is the most common cause of episodic vertigo in children. We summarize the clinical findings and laboratory test results in a cohort of children and adolescents with VM. We discuss the limitations of current classification criteria for dizzy children. Methods: A retrospective chart analysis was performed on 118 children with migraine related vertigo at a tertiary care center. Patients were grouped in the following categories: (1) definite vestibular migraine (dVM); (2) probable vestibular migraine (pVM); (3) suspected vestibular migraine (sVM); (4) benign paroxysmal vertigo (BPV); and (5) migraine with/without aura (oM) plus vertigo/dizziness according to the International Classification of Headache Disorders, 3rd edition (beta version). Results: The mean age of all patients was 12 ± 3 years (range 3–18 years, 70 females). 36 patients (30%) fulfilled criteria for dVM, 33 (28%) for pVM, 34 (29%) for sVM, 7 (6%) for BPV, and 8 (7%) for oM. Somatoform vertigo (SV) co-occurred in 27% of patients. Episodic syndromes were reported in 8%; the family history of migraine was positive in 65%. Mild central ocular motor signs were found in 24% (most frequently horizontal saccadic pursuit). Laboratory tests showed that about 20% had pathological function of the horizontal vestibulo-ocular reflex, and almost 50% had abnormal postural sway patterns. Conclusion: Patients with definite, probable, and suspected VM do not differ in the frequency of ocular motor, vestibular, or postural abnormalities. VM is the best explanation for their symptoms. It is essential to establish diagnostic criteria in clinical studies. In clinical practice, however, the most reasonable diagnosis should be made in order to begin treatment. Such a procedure also minimizes the fear of the parents and children, reduces the need to interrupt leisure time and school activities, and prevents the development of SV. PMID:25674076

  17. Wide-range radioactive-gas-concentration detector

    DOEpatents

    Anderson, D.F.

    1981-11-16

    A wide-range radioactive-gas-concentration detector and monitor capable of measuring radioactive-gas concentrations over a range of eight orders of magnitude is described. The device is designed to have an ionization chamber sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel-plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel-plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization-chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.

  18. Parallel implementation of D-Phylo algorithm for maximum likelihood clusters.

    PubMed

    Malik, Shamita; Sharma, Dolly; Khatri, Sunil Kumar

    2017-03-01

    This study explains a newly developed parallel algorithm for the phylogenetic analysis of DNA sequences. The newly designed D-Phylo is a more advanced algorithm for phylogenetic analysis using a maximum likelihood approach. D-Phylo, while exploiting the search capacity of k-means, avoids its main limitation of getting stuck at locally conserved motifs. The authors have tested the behaviour of D-Phylo on an Amazon Linux Amazon Machine Image (Hardware Virtual Machine) i2.4xlarge instance (six central processing units, 122 GiB memory, 8 × 800 solid-state-drive Elastic Block Store volumes, high network performance), with up to 15 processors, for several real-life datasets. Distributing the clusters evenly over all the processors provides the capacity to achieve a nearly linear speed-up for a large number of processors.

  19. An adaptable parallel algorithm for the direct numerical simulation of incompressible turbulent flows using a Fourier spectral/hp element method and MPI virtual topologies

    NASA Astrophysics Data System (ADS)

    Bolis, A.; Cantwell, C. D.; Moxey, D.; Serson, D.; Sherwin, S. J.

    2016-09-01

    A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier-spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation count and communication costs for identifying the most efficient parameter choices. The model is calibrated to target a specific hardware platform after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel. The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.
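
    The MPI virtual-topology mechanics referenced in the title look roughly like the following under mpi4py; this is a generic 2D Cartesian example, not the paper's actual decomposition, which pairs Fourier planes with spectral/hp elements.

      # Generic sketch of an MPI Cartesian virtual topology with mpi4py:
      # processes are arranged on a 2D grid and split into row/column
      # sub-communicators, the usual skeleton for hybrid parallelism.
      # Run with e.g.: mpirun -n 8 python cart_demo.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      dims = MPI.Compute_dims(comm.Get_size(), 2)     # e.g. 8 ranks -> [4, 2]
      cart = comm.Create_cart(dims, periods=[False, False], reorder=True)

      coords = cart.Get_coords(cart.Get_rank())
      row_comm = cart.Sub([False, True])   # communicator along the second axis
      col_comm = cart.Sub([True, False])   # communicator along the first axis

      print(f"rank {comm.Get_rank()} -> coords {coords}, "
            f"row size {row_comm.Get_size()}, col size {col_comm.Get_size()}")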

  20. Planning in subsumption architectures

    NASA Technical Reports Server (NTRS)

    Chalfant, Eugene C.

    1994-01-01

    A subsumption planner using a parallel distributed computational paradigm based on the subsumption architecture for control of real-world-capable robots is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network, forming a recurrent network which generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated by a simple maze navigation problem.

  1. [PVFS 2000: An operational parallel file system for Beowulf]

    NASA Technical Reports Server (NTRS)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises server and client components built on two abstraction layers. BMI: the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking and provides mechanisms for optimizations, including pinning user buffers. Currently, TCP/IP and GM (Myrinet) modules have been implemented. Trove: the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms, including native files, raw disk partitions, SQL, and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.

  2. The self in cyberspace. Identity formation in postmodern societies and Jung's Self as an objective psyche.

    PubMed

    Roesler, Christian

    2008-06-01

    Jung's concept of the Self is compared with current theories of identity formation in post-modern society concerning the question: is the self constituted through experience and cultural influences--as it is argued by current theories in the social sciences--or is it already preformed inside the person, as Jung argues? The impact of communication media on the formation of identity in today's societies is discussed with a focus on internet communication and virtual realities. The resulting types of identities are conceptualized as polycentric which has surprising parallels to Jung's idea of the Self. The epistemology of constructivism and parallels in Jung's thought are demonstrated. Jung's work in this respect often appears contradictory in itself but this can be dealt with by a postmodern approach which accepts a plurality of truths.

  3. Performance prediction: A case study using a multi-ring KSR-1 machine

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Zhu, Jianping

    1995-01-01

    While computers with tens of thousands of processors have successfully delivered high performance power for solving some of the so-called 'grand-challenge' applications, the notion of scalability is becoming an important metric in the evaluation of parallel machine architectures and algorithms. In this study, the prediction of scalability and its application are carefully investigated. A simple formula is presented to show the relation between scalability, single-processor computing power, and degradation of parallelism. A case study is conducted on a multi-ring KSR-1 shared virtual memory machine. Experimental and theoretical results show that the influence of topology variation of an architecture is predictable. Therefore, the performance of an algorithm on a sophisticated, hierarchical architecture can be predicted, and the best algorithm-machine combination can be selected for a given application.
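
    As a point of reference for formulas of this kind (given here for illustration only; the paper's own relation additionally folds in single-processor computing power), the classic Amdahl bound relates speedup on p processors to the parallelizable fraction f:

      \[ S(p) = \frac{1}{(1-f) + f/p}, \qquad \text{e.g. } f = 0.95,\; p = 32 \;\Rightarrow\; S = \frac{1}{0.05 + 0.95/32} \approx 12.5 . \]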

  4. In-silico guided discovery of novel CCR9 antagonists

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Cross, Jason B.; Romero, Jan; Heifetz, Alexander; Humphries, Eric; Hall, Katie; Wu, Yuchuan; Stucka, Sabrina; Zhang, Jing; Chandonnet, Haoqun; Lippa, Blaise; Ryan, M. Dominic; Baber, J. Christian

    2018-03-01

    Antagonism of CCR9 is a promising mechanism for treatment of inflammatory bowel disease, including ulcerative colitis and Crohn's disease. There is limited experimental data on CCR9 and its ligands, complicating efforts to identify new small molecule antagonists. We present here results of a successful virtual screening and rational hit-to-lead campaign that led to the discovery and initial optimization of novel CCR9 antagonists. This work uses a novel data fusion strategy to integrate the output of multiple computational tools, such as 2D similarity search, shape similarity, pharmacophore searching, and molecular docking, as well as the identification and incorporation of privileged chemokine fragments. The application of various ranking strategies, which combined consensus and parallel selection methods to achieve a balance of enrichment and novelty, resulted in 198 virtual screening hits in total, with an overall hit rate of 18%. Several hits were developed into early leads through targeted synthesis and purchase of analogs.

  5. Digital image compression for a 2f multiplexing optical setup

    NASA Astrophysics Data System (ADS)

    Vargas, J.; Amaya, D.; Rueda, E.

    2016-07-01

    In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.

  6. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  7. High-performance scientific computing in the cloud

    NASA Astrophysics Data System (ADS)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  8. Parallel Worlds of Public and Commercial Bioactive Chemistry Data

    PubMed Central

    2014-01-01

The availability of structures and linked bioactivity data in databases is powerfully enabling for drug discovery and chemical biology. However, we now review some confounding issues with the divergent expansions of public and commercial sources of chemical structures. These are associated not only with expanding patent extraction but also with increasingly large vendor collections, amassed via different selection criteria, between SciFinder from Chemical Abstracts Service (CAS) and major public sources such as PubChem, ChemSpider, UniChem, and others. These increasingly massive collections may include both real and virtual compounds, as well as so-called prophetic compounds from patents. We address a range of issues raised by the challenges faced in resolving the NIH probe compounds. In addition, we highlight the confounding of prior-art searching by virtual compounds, which could impact the composition-of-matter patentability of a new medicinal chemistry lead. Finally, we propose some potential solutions. PMID:25415348

  9. Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Lauer, Frank

This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale, and its performance properties can be evaluated. The results of an initial prototype are encouraging as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
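
    The simulator itself is not included in this record; the toy below shows only the discrete-event core such a system builds on, here sequential rather than parallel: virtual MPI ranks are just event records ordered by virtual time, which is why far more ranks can be simulated than the host has cores.

        import heapq

        def simulate_hello_world(n_ranks, latency=1e-6):
            # Each virtual rank "sends" a greeting to rank 0 at t=0; the
            # matching receive completes one latency later in virtual time.
            events = [(0.0, r, "send") for r in range(1, n_ranks)]
            heapq.heapify(events)
            received, t = 0, 0.0
            while events:
                t, rank, kind = heapq.heappop(events)
                if kind == "send":
                    heapq.heappush(events, (t + latency, 0, "recv"))
                else:
                    received += 1
            return t, received

        print(simulate_hello_world(1_048_576))   # scale far beyond physical cores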

  10. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-16

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.

  11. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-01

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time. PMID:23325347

  12. Adaptation of a Multi-Block Structured Solver for Effective Use in a Hybrid CPU/GPU Massively Parallel Environment

    NASA Astrophysics Data System (ADS)

    Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain

    2014-11-01

Multi-block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid GPU/CPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid GPU/CPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will elaborate on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module.
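
    The meta-block machinery in FINE/Turbo is proprietary, but the balancing step it enables can be sketched generically: given per-block cost estimates (which virtual partitioning makes roughly uniform and plentiful), a greedy longest-processing-time pass assigns blocks to ranks. A minimal sketch, assuming cost is proportional to block size:

        import heapq

        def balance(block_costs, n_ranks):
            # Greedy LPT: always give the next-largest block to the
            # least-loaded rank; returns (load, rank, block ids) triples.
            loads = [(0.0, r, []) for r in range(n_ranks)]
            heapq.heapify(loads)
            for bid, cost in sorted(enumerate(block_costs), key=lambda kv: -kv[1]):
                load, r, blocks = heapq.heappop(loads)
                blocks.append(bid)
                heapq.heappush(loads, (load + cost, r, blocks))
            return sorted(loads, key=lambda t: t[1])

        print(balance([9.0, 3.5, 3.5, 2.0, 1.0], n_ranks=2))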

  13. Hybrid MPI/OpenMP Implementation of the ORAC Molecular Dynamics Program for Generalized Ensemble and Fast Switching Alchemical Simulations.

    PubMed

    Procacci, Piero

    2016-06-27

We present a new release (6.0β) of the ORAC program [Marsili et al., J. Comput. Chem. 2010, 31, 1106-1116] with hybrid OpenMP/MPI (open multiprocessing/message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology, aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm project the code as a possible effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac.
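
    ORAC itself is a Fortran code; purely as an illustration of the two-level scheme, the mpi4py/multiprocessing sketch below gives each MPI rank one generalized-ensemble replica (the weak-scaling level) and decomposes that replica's force loop across local workers (the role OpenMP plays intranode). The force routine is a trivial stand-in.

        # Hedged analog of the two-level scheme; run with: mpirun -n R python demo.py
        from mpi4py import MPI
        from multiprocessing import Pool

        def partial_force(chunk):
            return sum(chunk)            # stand-in for a force-decomposition slice

        if __name__ == "__main__":
            replica = MPI.COMM_WORLD.Get_rank()   # one GE replica per MPI rank
            particles = list(range(100_000))
            slices = [particles[i::4] for i in range(4)]  # 4 intranode workers
            with Pool(4) as pool:
                total = sum(pool.map(partial_force, slices))
            print(f"replica {replica}: accumulated force {total}")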

  14. Trajectory Tracking of a Planar Parallel Manipulator by Using Computed Force Control Method

    NASA Astrophysics Data System (ADS)

    Bayram, Atilla

    2017-03-01

Despite their small workspace, parallel manipulators have some advantages over their serial counterparts in terms of higher speed, acceleration, rigidity, accuracy, manufacturing cost and payload. Accordingly, this type of manipulator can be used in many applications, such as high-speed machine tools, tuning machines for feeding, sensitive cutting, assembly and packaging. This paper presents a special type of planar parallel manipulator with three degrees of freedom. It is constructed as a variable geometry truss, generally known as a planar Stewart platform. The reachable and orientation workspaces are obtained for this manipulator. The inverse kinematic analysis is solved for trajectory tracking according to redundancy and joint limit avoidance. Then, the dynamics model of the manipulator is established by using the virtual work method. Simulations are performed to follow given planar trajectories by using the dynamic equations of the variable geometry truss manipulator and the computed force control method. In the computed force control method, the feedback gain matrices for PD control are tuned with fixed matrices by trial and error and with variable ones by means of optimization with a genetic algorithm.
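
    The computed force (computed torque) law referenced above has a standard generic form, stated in the sketch below for given dynamics matrices; Kp and Kd are the PD gain matrices the paper tunes by trial and error or a genetic algorithm, and the manipulator-specific M, C, and g terms are assumed supplied.

        import numpy as np

        def computed_force(q, dq, q_d, dq_d, ddq_d, M, C, g, Kp, Kd):
            # Cancel the nonlinear dynamics and impose linear PD error
            # dynamics on the tracking error e = q_d - q.
            e, de = q_d - q, dq_d - dq
            return M @ (ddq_d + Kd @ de + Kp @ e) + C @ dq + g

        # Toy 2-DOF usage with illustrative constant matrices:
        I2 = np.eye(2)
        tau = computed_force(np.zeros(2), np.zeros(2), np.ones(2), np.zeros(2),
                             np.zeros(2), M=I2, C=0.1 * I2, g=np.zeros(2),
                             Kp=100 * I2, Kd=20 * I2)
        print(tau)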

  15. SHARED VIRTUAL ENVIRONMENTS FOR COLLECTIVE TRAINING

    NASA Technical Reports Server (NTRS)

    Loftin, R. Bowen

    2000-01-01

Historically NASA has trained teams of astronauts by bringing them to the Johnson Space Center in Houston to undergo generic training, followed by mission-specific training. This latter training begins after a crew has been selected for a mission (perhaps two years before the launch of that mission). While some Space Shuttle flights have included an astronaut from a foreign country, the International Space Station will be consistently crewed by teams comprised of astronauts from two or more of the partner nations. The cost of training these international teams continues to grow in both monetary and personal terms. Thus, NASA has been seeking alternative training approaches for the International Space Station program. Since 1994 we have been developing, testing, and refining shared virtual environments for astronaut team training, including virtual environments for use while in, or in transit to, the task location. In parallel with this effort, we have also been preparing applications for training teams of military personnel engaged in peacekeeping missions. This paper will describe the applications developed to date, some of the technological challenges that have been overcome in their development, and the research performed to guide the development and to measure the efficacy of these shared environments as training tools.

  16. Real-time simultaneous and proportional myoelectric control using intramuscular EMG

    PubMed Central

    Kuiken, Todd A; Hargrove, Levi J

    2014-01-01

    Objective Myoelectric prostheses use electromyographic (EMG) signals to control movement of prosthetic joints. Clinically available myoelectric control strategies do not allow simultaneous movement of multiple degrees of freedom (DOFs); however, the use of implantable devices that record intramuscular EMG signals could overcome this constraint. The objective of this study was to evaluate the real-time simultaneous control of three DOFs (wrist rotation, wrist flexion/extension, and hand open/close) using intramuscular EMG. Approach We evaluated task performance of five able-bodied subjects in a virtual environment using two control strategies with fine-wire EMG: (i) parallel dual-site differential control, which enabled simultaneous control of three DOFs and (ii) pattern recognition control, which required sequential control of DOFs. Main Results Over the course of the experiment, subjects using parallel dual-site control demonstrated increased use of simultaneous control and improved performance in a Fitts' Law test. By the end of the experiment, performance using parallel dual-site control was significantly better (up to a 25% increase in throughput) than when using sequential pattern recognition control for tasks requiring multiple DOFs. The learning trends with parallel dual-site control suggested that further improvements in performance metrics were possible. Subjects occasionally experienced difficulty in performing isolated single-DOF movements with parallel dual-site control but were able to accomplish related Fitts' Law tasks with high levels of path efficiency. Significance These results suggest that intramuscular EMG, used in a parallel dual-site configuration, can provide simultaneous control of a multi-DOF prosthetic wrist and hand and may outperform current methods that enforce sequential control. PMID:25394366

  17. Real-time simultaneous and proportional myoelectric control using intramuscular EMG

    NASA Astrophysics Data System (ADS)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2014-12-01

    Objective. Myoelectric prostheses use electromyographic (EMG) signals to control movement of prosthetic joints. Clinically available myoelectric control strategies do not allow simultaneous movement of multiple degrees of freedom (DOFs); however, the use of implantable devices that record intramuscular EMG signals could overcome this constraint. The objective of this study was to evaluate the real-time simultaneous control of three DOFs (wrist rotation, wrist flexion/extension, and hand open/close) using intramuscular EMG. Approach. We evaluated task performance of five able-bodied subjects in a virtual environment using two control strategies with fine-wire EMG: (i) parallel dual-site differential control, which enabled simultaneous control of three DOFs and (ii) pattern recognition control, which required sequential control of DOFs. Main results. Over the course of the experiment, subjects using parallel dual-site control demonstrated increased use of simultaneous control and improved performance in a Fitts’ Law test. By the end of the experiment, performance using parallel dual-site control was significantly better (up to a 25% increase in throughput) than when using sequential pattern recognition control for tasks requiring multiple DOFs. The learning trends with parallel dual-site control suggested that further improvements in performance metrics were possible. Subjects occasionally experienced difficulty in performing isolated single-DOF movements with parallel dual-site control but were able to accomplish related Fitts’ Law tasks with high levels of path efficiency. Significance. These results suggest that intramuscular EMG, used in a parallel dual-site configuration, can provide simultaneous control of a multi-DOF prosthetic wrist and hand and may outperform current methods that enforce sequential control.
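
    Throughput in a Fitts' Law test is conventionally computed as index of difficulty over movement time; the snippet below shows this standard (Shannon-form) calculation so the reported 25% throughput gain can be read concretely. The target distance and width values are purely illustrative.

        import math

        def throughput(distance, width, movement_time):
            # Shannon formulation: ID = log2(D/W + 1), TP = ID / MT (bits/s).
            return math.log2(distance / width + 1) / movement_time

        baseline = throughput(distance=0.30, width=0.05, movement_time=2.0)
        print(baseline, 1.25 * baseline)   # a 25% gain, as reported in the study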

  18. Spontaneous Hot Flow Anomalies at Quasi-Parallel Shocks: 2. Hybrid Simulations

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Zhang, H.; Sibeck, D.; Turner, D.

    2013-01-01

    Motivated by recent THEMIS observations, this paper uses 2.5-D electromagnetic hybrid simulations to investigate the formation of Spontaneous Hot Flow Anomalies (SHFA) upstream of quasi-parallel bow shocks during steady solar wind conditions and in the absence of discontinuities. The results show the formation of a large number of structures along and upstream of the quasi-parallel bow shock. Their outer edges exhibit density and magnetic field enhancements, while their cores exhibit drops in density, magnetic field, solar wind velocity and enhancements in ion temperature. Using virtual spacecraft in the simulation, we show that the signatures of these structures in the time series data are very similar to those of SHFAs seen in THEMIS data and conclude that they correspond to SHFAs. Examination of the simulation data shows that SHFAs form as the result of foreshock cavitons interacting with the bow shock. Foreshock cavitons in turn form due to the nonlinear evolution of ULF waves generated by the interaction of the solar wind with the backstreaming ions. Because foreshock cavitons are an inherent part of the shock dissipation process, the formation of SHFAs is also an inherent part of the dissipation process leading to a highly non-uniform plasma in the quasi-parallel magnetosheath including large scale density and magnetic field cavities.

  19. A Family of ACO Routing Protocols for Mobile Ad Hoc Networks.

    PubMed

    Rupérez Cañas, Delfín; Sandoval Orozco, Ana Lucila; García Villalba, Luis Javier; Kim, Tai-Hoon

    2017-05-22

In this work, an ACO routing protocol for mobile ad hoc networks based on AntHocNet is specified. Like its predecessor, this new protocol, called AntOR, is hybrid in the sense that it contains elements from both reactive and proactive routing. Specifically, it combines a reactive route setup process with a proactive route maintenance and improvement process. Key aspects of the AntOR protocol are its disjoint-link and disjoint-node routes, the separation between the regular pheromone and the virtual pheromone in the diffusion process, and the exploration of routes taking into consideration the number of hops in the best routes. In this work, a family of ACO routing protocols based on AntOR is also specified. These protocols are based on successive protocol refinements. We also present a parallelized version of AntOR that we call PAntOR. Using programming multiprocessor architectures based on the shared memory protocol, PAntOR allows running tasks in parallel using threads. This parallelization is applicable in the route setup phase, the route local repair process and link failure notification. In addition, a variant of PAntOR that consists of having more than one interface, which we call PAntOR-MI (PAntOR-Multiple Interface), is specified. This approach parallelizes the sending of broadcast messages by interface through threads.
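
    The PAntOR-MI idea of parallelizing broadcasts by interface can be sketched with plain threads; the interface names and send routine below are placeholders for the protocol's actual packet transmission.

        import threading

        def send_broadcast(interface, packet):
            print(f"{interface}: broadcasting {packet}")   # stand-in for a real send

        def parallel_broadcast(interfaces, packet):
            # One thread per interface, so transmissions overlap in time.
            threads = [threading.Thread(target=send_broadcast, args=(i, packet))
                       for i in interfaces]
            for t in threads:
                t.start()
            for t in threads:
                t.join()

        parallel_broadcast(["wlan0", "wlan1"], "route-setup ant")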

  20. Hard X-Ray And Wide Focusing Telescopes

    NASA Technical Reports Server (NTRS)

    Gorenstein, Paul; Johnson, William B. (Technical Monitor)

    2001-01-01

    The development of a hard X-ray telescope requires new technology for both substrates and coatings. Our activities in these two areas were carried out virtually in parallel during most of the past few years. They are converging on the production of our first integral conical, substrate electroformed mirror that will be coated with a graded d-spacing multilayer. Its imaging properties and effective area will be measured in hard X-ray beams. We discuss each of these activities separately in the following two sections.

  1. Striking First: Preemptive and Preventive Attack in U.S. National Security Policy

    DTIC Science & Technology

    2006-01-01

...source of petroleum before a U.S. oil embargo could bring the Japanese war effort in China to its knees... Such cases have significant parallels with... anticipation of being attacked are a central concern in virtually all rules of engagement...

  2. DOVIS 2.0: An Efficient and Easy to Use Parallel Virtual Screening Tool Based on AutoDock 4.0

    DTIC Science & Technology

    2008-09-08

under the GNU General Public License. Background: Molecular docking is a computational method that predicts how a ligand interacts with a receptor... Hence, it is an important tool in studying receptor-ligand interactions and plays an essential role in drug design. Particularly, molecular docking has... libraries from OpenBabel and set up a molecular data structure as a C++ object in our program. This makes handling of molecular structures (e.g., atoms...

  3. Recruitment of human aquaporin 3 to internal membranes in the Plasmodium falciparum infected erythrocyte.

    PubMed

    Bietz, Sven; Montilla, Irine; Külzer, Simone; Przyborski, Jude M; Lingelbach, Klaus

    2009-09-01

    The molecular mechanisms underlying the formation of the parasitophorous vacuolar membrane in Plasmodium falciparum infected erythrocytes are incompletely understood, and the protein composition of this membrane is still enigmatic. Although the differentiated mammalian erythrocyte lacks the machinery required for endocytosis, some reports have described a localisation of host cell membrane proteins at the parasitophorous vacuolar membrane. Aquaporin 3 is an abundant plasma membrane protein of various cells, including mammalian erythrocytes where it is found in distinct oligomeric states. Here we show that human aquaporin 3 is internalized into infected erythrocytes, presumably during or soon after invasion. It is integrated into the PVM where it is organized in novel oligomeric states which are not found in non-infected cells.

  4. PARALLEL ASSAY OF OXYGEN EQUILIBRIA OF HEMOGLOBIN

    PubMed Central

    Lilly, Laura E.; Blinebry, Sara K.; Viscardi, Chelsea M.; Perez, Luis; Bonaventura, Joe; McMahon, Tim J.

    2013-01-01

    Methods to systematically analyze in parallel the function of multiple protein or cell samples in vivo or ex vivo (i.e. functional proteomics) in a controlled gaseous environment have thus far been limited. Here we describe an apparatus and procedure that enables, for the first time, parallel assay of oxygen equilibria in multiple samples. Using this apparatus, numerous simultaneous oxygen equilibrium curves (OECs) can be obtained under truly identical conditions from blood cell samples or purified hemoglobins (Hbs). We suggest that the ability to obtain these parallel datasets under identical conditions can be of immense value, both to biomedical researchers and clinicians who wish to monitor blood health, and to physiologists studying non-human organisms and the effects of climate change on these organisms. Parallel monitoring techniques are essential in order to better understand the functions of critical cellular proteins. The procedure can be applied to human studies, wherein an OEC can be analyzed in light of an individual’s entire genome. Here, we analyzed intraerythrocytic Hb, a protein that operates at the organism’s environmental interface and then comes into close contact with virtually all of the organism’s cells. The apparatus is theoretically scalable, and establishes a functional proteomic screen that can be correlated with genomic information on the same individuals. This new method is expected to accelerate our general understanding of protein function, an increasingly challenging objective as advances in proteomic and genomic throughput outpace the ability to study proteins’ functional properties. PMID:23827235
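
    An oxygen equilibrium curve from any one channel of such an apparatus is conventionally summarized with the Hill model; the sketch below evaluates several samples' curves over a shared pO2 grid. The P50 and cooperativity values are illustrative, not the paper's data.

        import numpy as np

        def hill_saturation(pO2, p50, n):
            # Fractional Hb-O2 saturation under the Hill model.
            return pO2**n / (p50**n + pO2**n)

        pO2 = np.linspace(0.1, 100, 200)               # mmHg
        samples = [(26.0, 2.7), (19.0, 2.5)]           # illustrative (P50, n) pairs
        curves = np.array([hill_saturation(pO2, p, n) for p, n in samples])
        print(curves.shape)                            # one OEC per sample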

  5. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300

  6. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    PubMed

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.
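
    GeauxDock's move set and scoring function are elaborate, but the Monte Carlo core it is built upon follows the familiar Metropolis pattern; the sketch below shows that acceptance rule alone, with an assumed effective temperature parameter and a random-number stand-in for pose perturbation and rescoring.

        import math, random

        def metropolis_accept(e_old, e_new, kT=0.59):
            # Accept downhill moves always; uphill moves with Boltzmann
            # probability (kT is an assumed effective value, kcal/mol near 300 K).
            if e_new <= e_old:
                return True
            return random.random() < math.exp(-(e_new - e_old) / kT)

        # Trial loop: perturb a pose, rescore, accept or revert.
        energy = 0.0
        for _ in range(1000):
            trial = energy + random.gauss(0.0, 1.0)   # stand-in for move + score
            if metropolis_accept(energy, trial):
                energy = trial
        print(energy)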

  7. Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure

    DOE PAGES

    Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.

    2015-09-29

In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices as well as the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework verified by theoretically correct results expected from analytical models of the same scenarios. In the largest high fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.

  8. Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.

In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices as well as the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework verified by theoretically correct results expected from analytical models of the same scenarios. In the largest high fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.

  9. Kinematics and dynamics of robotic systems with multiple closed loops

    NASA Astrophysics Data System (ADS)

    Zhang, Chang-De

    The kinematics and dynamics of robotic systems with multiple closed loops, such as Stewart platforms, walking machines, and hybrid manipulators, are studied. In the study of kinematics, focus is on the closed-form solutions of the forward position analysis of different parallel systems. A closed-form solution means that the solution is expressed as a polynomial in one variable. If the order of the polynomial is less than or equal to four, the solution has analytical closed-form. First, the conditions of obtaining analytical closed-form solutions are studied. For a Stewart platform, the condition is found to be that one rotational degree of freedom of the output link is decoupled from the other five. Based on this condition, a class of Stewart platforms which has analytical closed-form solution is formulated. Conditions of analytical closed-form solution for other parallel systems are also studied. Closed-form solutions of forward kinematics for walking machines and multi-fingered grippers are then studied. For a parallel system with three three-degree-of-freedom subchains, there are 84 possible ways to select six independent joints among nine joints. These 84 ways can be classified into three categories: Category 3:3:0, Category 3:2:1, and Category 2:2:2. It is shown that the first category has no solutions; the solutions of the second category have analytical closed-form; and the solutions of the last category are higher order polynomials. The study is then extended to a nearly general Stewart platform. The solution is a 20th order polynomial and the Stewart platform has a maximum of 40 possible configurations. Also, the study is extended to a new class of hybrid manipulators which consists of two serially connected parallel mechanisms. In the study of dynamics, a computationally efficient method for inverse dynamics of manipulators based on the virtual work principle is developed. Although this method is comparable with the recursive Newton-Euler method for serial manipulators, its advantage is more noteworthy when applied to parallel systems. An approach of inverse dynamics of a walking machine is also developed, which includes inverse dynamic modeling, foot force distribution, and joint force/torque allocation.
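
    For contrast with the forward problem studied above (which requires polynomial solutions), the inverse kinematics of a Stewart platform is closed-form by inspection: each leg length is the distance between a base anchor and the transformed platform anchor. A minimal sketch with an arbitrary illustrative geometry:

        import numpy as np

        def leg_lengths(p, R, base_pts, plat_pts):
            # Leg i spans base anchor a_i to platform anchor p + R b_i.
            return np.linalg.norm(p + plat_pts @ R.T - base_pts, axis=1)

        base = np.array([[np.cos(a), np.sin(a), 0.0]
                         for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
        plat = 0.5 * base                                  # illustrative geometry
        lengths = leg_lengths(np.array([0.0, 0.0, 1.0]), np.eye(3), base, plat)
        print(lengths)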

  10. Improving the accuracy of the diagnosis of schizophrenia by means of virtual reality.

    PubMed

    Sorkin, Anna; Weinshall, Daphna; Modai, Ilan; Peled, Avi

    2006-03-01

    The authors' goal was to improve the diagnosis of schizophrenia by using virtual reality technology to build a complex, multimodal environment in which cognitive functions can be studied (and measured) in parallel. The authors studied sensory integration within working memory by means of computer navigation through a virtual maze. The simulated journey consisted of a series of rooms, each of which included three doors. Each door was characterized by three features (color, shape, and sound), and a single combination of features--the door-opening rule--was correct. Subjects had to learn the rule and use it. The participants were 39 schizophrenic patients and 21 healthy comparison subjects. Upon completion, each subject was assigned a performance profile, including various error scores, response time, navigation ability, and strategy. A classification procedure based on the subjects' performance profile correctly predicted 85% of the schizophrenic patients (and all of the comparison subjects). Several performance variables showed significant correlations with scores on a standard diagnostic measure (Positive and Negative Syndrome Scale), suggesting potential use of these measurements for the diagnosis of schizophrenia. On the other hand, the patients did not show unusual repetition of response despite stimulus cessation (called "perseveration" in classical studies of schizophrenia), which is a common symptom of the disease. This deficit appeared only when the subjects did not receive proper explanation of the task. The ability to study multimodal performance simultaneously by using virtual reality technology opens new possibilities for the diagnosis of schizophrenia with objective procedures.

  11. Virtual screening of integrase inhibitors by large scale binding free energy calculations: the SAMPL4 challenge

    PubMed Central

    Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.

    2014-01-01

As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and large scale binding energy distribution analysis method binding free energy calculations have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents, to our knowledge, the first example of the application of an all-atom physics-based binding free energy model to large scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, calculations were distributed on multiple computing resources, and were completed in a 6-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704
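
    The headline metric in this kind of blind screening exercise, the enrichment factor, has a simple definition worth making explicit; the helper below computes it from a ranked list of active/inactive labels (the labels and fraction are illustrative, not the challenge data).

        def enrichment_factor(ranked_labels, top_fraction=0.1):
            # Hit rate among the top-ranked fraction divided by the
            # hit rate over the whole library.
            n = len(ranked_labels)
            k = max(1, int(top_fraction * n))
            top_rate = sum(ranked_labels[:k]) / k
            overall_rate = sum(ranked_labels) / n
            return top_rate / overall_rate

        # 1 = experimentally confirmed binder, 0 = non-binder:
        print(enrichment_factor([1, 1, 0, 1, 0, 0, 0, 0, 1, 0], top_fraction=0.3))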

  12. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
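
    The Amdahl's-law analysis mentioned above reduces to a one-line model; as the snippet shows, even a 99% parallel workload caps out below the observed 12-fold gain on 12 cores, which is why the measured scaling points to a very high parallel fraction.

        def amdahl_speedup(parallel_fraction, cores):
            # S(n) = 1 / ((1 - p) + p / n)
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

        for p in (0.90, 0.99, 1.00):
            print(p, amdahl_speedup(p, cores=12))
        # p = 0.99 gives only ~10.8x on 12 cores; a measured 12x implies p ≈ 1.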

  13. Techniques for Single System Integration of Elastic Simulation Features

    NASA Astrophysics Data System (ADS)

    Mitchell, Nathan M.

    Techniques for simulating the behavior of elastic objects have matured considerably over the last several decades, tackling diverse problems from non-linear models for incompressibility to accurate self-collisions. Alongside these contributions, advances in parallel hardware design and algorithms have made simulation more efficient and affordable than ever before. However, prior research often has had to commit to design choices that compromise certain simulation features to better optimize others, resulting in a fragmented landscape of solutions. For complex, real-world tasks, such as virtual surgery, a holistic approach is desirable, where complex behavior, performance, and ease of modeling are supported equally. This dissertation caters to this goal in the form of several interconnected threads of investigation, each of which contributes a piece of an unified solution. First, it will be demonstrated how various non-linear materials can be combined with lattice deformers to yield simulations with behavioral richness and a high potential for parallelism. This potential will be exploited to show how a hybrid solver approach based on large macroblocks can accelerate the convergence of these deformers. Further extensions of the lattice concept with non-manifold topology will allow for efficient processing of self-collisions and topology change. Finally, these concepts will be explored in the context of a case study on virtual plastic surgery, demonstrating a real-world problem space where these ideas can be combined to build an expressive authoring tool, allowing surgeons to record procedures digitally for future reference or education.

  14. Experimental study on heat transfer enhancement of laminar ferrofluid flow in horizontal tube partially filled porous media under fixed parallel magnet bars

    NASA Astrophysics Data System (ADS)

    Sheikhnejad, Yahya; Hosseini, Reza; Saffar Avval, Majid

    2017-02-01

In this study, steady-state laminar ferroconvection through a circular horizontal tube partially filled with porous media under constant heat flux is experimentally investigated. Transverse magnetic fields were applied to the ferrofluid flow by two fixed parallel magnet bars positioned at a certain distance from the beginning of the test section. The results show notable enhancement in heat transfer as a consequence of the partially filled porous media and the magnetic field: enhancements of up to 2.2-fold and 1.4-fold in the heat transfer coefficient were observed, respectively. It was found that the simultaneous presence of both porous media and a magnetic field can improve heat transfer by up to 2.4-fold; porous media, of course, plays the major role in this configuration. The magnetic field and porous media also impose a higher pressure loss along the pipe, with the porous media again contributing more than the magnetic field.

  15. Wide range radioactive gas concentration detector

    DOEpatents

    Anderson, David F.

    1984-01-01

    A wide range radioactive gas concentration detector and monitor which is capable of measuring radioactive gas concentrations over a range of eight orders of magnitude. The device of the present invention is designed to have an ionization chamber which is sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.

  16. Phylogeny of the TRAF/MATH domain.

    PubMed

    Zapata, Juan M; Martínez-García, Vanesa; Lefebvre, Sophie

    2007-01-01

The TNF-receptor associated factor (TRAF) domain (TD), also known as the meprin and TRAF-C homology (MATH) domain, is a fold of seven anti-parallel β-helices that participates in protein-protein interactions. This fold is broadly represented among eukaryotes, where it is found associated with a discrete set of protein domains. Virtually all protein families encompassing a TRAF/MATH domain seem to be involved in the regulation of protein processing and ubiquitination, strongly suggesting a parallel evolution of the TRAF/MATH domain and certain proteolysis pathways in eukaryotes. The restricted number of living organisms for which we have information on their genetic and protein make-up limits the scope of any analysis of the MATH domain in evolution. However, the available information allows us to get a glimpse of the origins, distribution, and evolution of the TRAF/MATH domain, which will be overviewed in this chapter.

  17. Oligopeptides of Chorionic Gonadotropin β-Subunit in Induction of T Cell Differentiation into Treg and Th17.

    PubMed

    Zamorina, S A; Shirshev, S V

    2015-11-01

The role of oligopeptides of the chorionic gonadotropin β-subunit (LQGV, AQGV, and VLPALP) in the induction of differentiation into T-regulatory lymphocytes (Treg) and IL-17-producing lymphocytes (Th17) was studied in an in vitro system. Chorionic gonadotropin and the oligopeptides promoted CD4(+) cell differentiation into functionally active Treg (FOXP3(+)GITR(+) and FOXP3(+)CTLA-4(+)), while chorionic gonadotropin and AQGV additionally stimulated IL-10 production by these cells. In parallel, chorionic gonadotropin and the oligopeptides prevented CD4(+) cell differentiation into Th17 lymphocytes (ROR-γt(+)IL-17A(+)) and suppressed IL-17A secretion. Hence, oligopeptides of the chorionic gonadotropin β-subunit promoted differentiation of CD4(+) cells into Treg and, in parallel, suppressed Th17 induction, thus virtually completely reproducing the effects of the hormone, which opens new vistas for their use in clinical practice.

  18. Supersonic civil airplane study and design: Performance and sonic boom

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    1995-01-01

Since aircraft configuration plays an important role in aerodynamic performance and sonic boom shape, the configuration of the next-generation supersonic civil transport has to be tailored to meet high aerodynamic performance and low sonic boom requirements. Computational fluid dynamics (CFD) can be used to design airplanes to meet these dual objectives. The work and results in this report support NASA's High Speed Research Program (HSRP). CFD tools and techniques have been developed for general use in sonic boom propagation studies and aerodynamic design. Parallel to the research effort on sonic boom extrapolation, CFD flow solvers have been coupled with a numerical optimization tool to form a design package for aircraft configuration. This CFD optimization package has been applied to configuration design for a low-boom concept and an oblique all-wing concept. A nonlinear unconstrained optimizer for the Parallel Virtual Machine has been developed for aerodynamic design and study.
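
    The optimizer described here ran its objective evaluations under PVM; the sketch below reproduces only the master-worker shape of that arrangement with Python's multiprocessing, with a trivial stand-in objective in place of a CFD flow solution.

        from multiprocessing import Pool

        def objective(design):
            # Stand-in for a CFD evaluation of drag/boom for one configuration.
            return sum((x - 0.5) ** 2 for x in design)

        if __name__ == "__main__":
            candidates = [[i / 10.0, 1.0 - i / 10.0] for i in range(8)]
            with Pool(4) as pool:          # workers play the role of PVM tasks
                scores = pool.map(objective, candidates)
            best = min(zip(scores, candidates))
            print(best)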

  19. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  20. Evaluation of the eruptive potential and probability in open conduit volcano (Mt Etna) based on soil CO2 flux measurements

    NASA Astrophysics Data System (ADS)

    De Gregorio, Sofia; Camarda, Marco

    2016-04-01

The evaluation of the amount of magma that might potentially be erupted, i.e. the eruptive potential (EP), and of the probability of an eruptive event occurring, i.e. the eruptive probability (EPR), of an active volcano is one of the most compelling and challenging topics addressed by the volcanology community in recent years. The evaluation of the EP of an open-conduit volcano is generally based on the constant magma supply rate deduced from long-term series of eruptive rates. This EP computation gives good results for long-term (centuries) evaluations, but proves less effective when short-term (years or months) estimates are needed. In fact, the rate of magma supply can undergo changes on both the long term and the short term. At steady conditions it can be supposed that the regular supply of magma determines an almost constant level of magma in the feeding system (FS), whereas episodic surpluses of magma input, with respect to the regular supply, can cause large variations in the magma level. It follows that the surplus of magma occasionally entering the FS represents a supply of material that sooner or later will be disposed of, i.e. it will be emitted. Accordingly, the amount of surplus magma entering the FS nearly corresponds to the amount of magma that must be erupted in order to restore equilibrium. Further, the larger the amount of surplus magma stored in the system, the higher the energetic level of the system and its propensity to erupt, in other words its EPR. In light of the above considerations, we present here an innovative methodology to evaluate the EP based on the quantification of the surplus of magma, with respect to the regular supply, progressively intruded into the FS. To estimate the surplus of magma supply we used soil CO2 emission data measured monthly at 130 sites in two peripheral areas of Mt Etna volcano. Indeed, as reported by many authors, soil CO2 emissions in these areas are linked to magma supply dynamics, and anomalous discharges of CO2 are ascribable to surpluses of magma intruded into the feeding system. We analyzed ten years of data and, according to Henry's law, associated anomalous periods of degassing (i.e. peaks) with a partial volume of magma (PVM) intruded into the FS. Although we have only a partial view of the volume of magma involved, the view is always the same, and hence the magnitude of the recorded anomalies is proportional to the total amount of surplus magma entering the FS. Thus, we found a conversion factor able to convert the PVM into the total amount of surplus magma. This factor was deduced by comparing, over specific periods, the cumulative value of the PVM with the cumulative volume of eruptive products (VEP). The EP over a given period of time is then computed as the difference between the surplus volume of magma intruded and the VEP progressively emitted. Simple statistical treatment can be applied to the time series of the EP to define a threshold value and to identify periods with a high level of EP, and hence periods with a high EPR. The results over ten years of monitoring show that 80% of the time eruptive events started when the values of EPR were high.
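
    Reading the method as described, the bookkeeping can be stated in a few lines: calibrate a conversion factor from cumulative PVM to cumulative erupted volume over a reference period, then track the running difference between converted intrusions and emissions. The series below are invented placeholders, not Etna data.

        def eruptive_potential(pvm, vep):
            # Conversion factor: cumulative erupted volume per unit of
            # cumulative anomaly-derived partial magma volume (PVM).
            k = sum(vep) / sum(pvm)
            ep, cum_in, cum_out = [], 0.0, 0.0
            for anomaly, erupted in zip(pvm, vep):
                cum_in += k * anomaly
                cum_out += erupted
                ep.append(cum_in - cum_out)
            return ep

        # Placeholder monthly series (arbitrary units):
        print(eruptive_potential([0, 2, 5, 1, 0, 3], [0, 0, 4, 2, 0, 1]))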

  1. A Family of ACO Routing Protocols for Mobile Ad Hoc Networks

    PubMed Central

    Rupérez Cañas, Delfín; Sandoval Orozco, Ana Lucila; García Villalba, Luis Javier; Kim, Tai-hoon

    2017-01-01

In this work, an ACO routing protocol for mobile ad hoc networks based on AntHocNet is specified. Like its predecessor, this new protocol, called AntOR, is hybrid in the sense that it contains elements from both reactive and proactive routing. Specifically, it combines a reactive route setup process with a proactive route maintenance and improvement process. Key aspects of the AntOR protocol are its disjoint-link and disjoint-node routes, the separation between the regular pheromone and the virtual pheromone in the diffusion process, and the exploration of routes taking into consideration the number of hops in the best routes. In this work, a family of ACO routing protocols based on AntOR is also specified. These protocols are based on successive protocol refinements. We also present a parallelized version of AntOR that we call PAntOR. Using programming multiprocessor architectures based on the shared memory protocol, PAntOR allows running tasks in parallel using threads. This parallelization is applicable in the route setup phase, the route local repair process and link failure notification. In addition, a variant of PAntOR that consists of having more than one interface, which we call PAntOR-MI (PAntOR-Multiple Interface), is specified. This approach parallelizes the sending of broadcast messages by interface through threads. PMID:28531159

  2. Computer Science Techniques Applied to Parallel Atomistic Simulation

    NASA Astrophysics Data System (ADS)

    Nakano, Aiichiro

    1998-03-01

    Recent developments in parallel processing technology and multiresolution numerical algorithms have established large-scale molecular dynamics (MD) simulations as a new research mode for studying materials phenomena such as fracture. However, this requires large system sizes and long simulated times. We have developed: i) Space-time multiresolution schemes; ii) fuzzy-clustering approach to hierarchical dynamics; iii) wavelet-based adaptive curvilinear-coordinate load balancing; iv) multilevel preconditioned conjugate gradient method; and v) spacefilling-curve-based data compression for parallel I/O. Using these techniques, million-atom parallel MD simulations are performed for the oxidation dynamics of nanocrystalline Al. The simulations take into account the effect of dynamic charge transfer between Al and O using the electronegativity equalization scheme. The resulting long-range Coulomb interaction is calculated efficiently with the fast multipole method. Results for temperature and charge distributions, residual stresses, bond lengths and bond angles, and diffusivities of Al and O will be presented. The oxidation of nanocrystalline Al is elucidated through immersive visualization in virtual environments. A unique dual-degree education program at Louisiana State University will also be discussed in which students can obtain a Ph.D. in Physics & Astronomy and a M.S. from the Department of Computer Science in five years. This program fosters interdisciplinary research activities for interfacing High Performance Computing and Communications with large-scale atomistic simulations of advanced materials. This work was supported by NSF (CAREER Program), ARO, PRF, and Louisiana LEQSF.

  3. Geometric and perceptual effects of the location of the observer vantage point for linear-perspective images.

    PubMed

    Todorović, Dejan

    2005-01-01

    New geometric analyses are presented of three impressive examples of the effects of location of the vantage point on virtual 3-D spaces conveyed by linear-perspective images. In the 'egocentric-road' effect, the perceived direction of the depicted road is always pointed towards the observer, for any position of the vantage point. It is shown that perspective images of real-observer-aimed roads are characterised by a specific, simple pattern of projected side lines. Given that pattern, the position of the observer, and certain assumptions and perspective arguments, the perceived direction of the virtual road towards the observer can be predicted. In the 'skewed balcony' and the 'collapsing ceiling' effects, the position of the vantage point affects the impression of alignment of the virtual architecture conveyed by large-scale illusionistic paintings and the real architecture surrounding them. It is shown that the dislocation of the vantage point away from the viewing position prescribed by the perspective construction induces a mismatch between the painted vanishing point of elements in the picture and the real vanishing point of corresponding elements of the actual architecture. This mismatch of vanishing points provides visual information that the elements of the two architectures are not mutually parallel.

  4. Concept of Operations for Commercial and Business Aircraft Synthetic Vision Systems. 1.0

    NASA Technical Reports Server (NTRS)

    Williams Daniel M.; Waller, Marvin C.; Koelling, John H.; Burdette, Daniel W.; Capron, William R.; Barry, John S.; Gifford, Richard B.; Doyle, Thomas M.

    2001-01-01

A concept of operations (CONOPS) for Commercial and Business (CaB) aircraft synthetic vision systems (SVS) is described. The CaB SVS is expected to provide increased safety and operational benefits in normal and low visibility conditions. Providing operational benefits will promote SVS implementation in the fleet, improve aviation safety, and assist in meeting the national aviation safety goal. SVS will enhance safety and enable consistent gate-to-gate aircraft operations in normal and low visibility conditions. The goal for developing SVS is to support operational minima as low as Category 3b in a variety of environments. For departure and ground operations, the SVS goal is to enable operations with a runway visual range of 300 feet. The system is an integrated display concept that provides a virtual visual environment. The SVS virtual visual environment is composed of three components: an enhanced intuitive view of the flight environment, hazard and obstacle detection and display, and precision navigation guidance. The virtual visual environment will support enhanced operations procedures during all phases of flight - ground operations, departure, en route, and arrival. The applications selected for emphasis in this document include low visibility departures and arrivals, including parallel runway operations, and low visibility airport surface operations. These particular applications were selected because of the significant potential benefits afforded by SVS.

  5. ChemHTPS - A virtual high-throughput screening program suite for the chemical and materials sciences

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Evangelista, William; Hachmann, Johannes

    The discovery of new compounds, materials, and chemical reactions with exceptional properties is key to addressing grand challenges in innovation, energy, and sustainability. This process can be dramatically accelerated by means of the virtual high-throughput screening (HTPS) of large-scale candidate libraries. The resulting data can further be used to study the underlying structure-property relationships and thus facilitate rational design capability. This approach has been extensively used for many years in the drug discovery community. However, the lack of openly available virtual HTPS tools is limiting the use of these techniques in various other applications such as photovoltaics, optoelectronics, and catalysis. Thus, we developed ChemHTPS, a general-purpose, comprehensive and user-friendly suite that allows users to efficiently perform large in silico modeling studies and high-throughput analyses in these applications. ChemHTPS also includes a massively parallel molecular library generator which offers a multitude of options to customize and restrict the scope of the enumerated chemical space and thus tailor it for the demands of specific applications. To streamline the non-combinatorial exploration of chemical space, we incorporate genetic algorithms into the framework. In addition to implementing smarter algorithms, we also focus on the ease of use, workflow, and code integration to make this technology more accessible to the community.
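
    As a rough illustration of the combinatorial core of such a library generator (the ChemHTPS internals are not described in this abstract; the scaffold, substitution-site markers, and fragments below are purely hypothetical), one can enumerate decorations of a scaffold with building-block fragments:

      from itertools import product

      scaffold = "c1cc([R1])cc([R2])c1"            # toy scaffold with two open sites
      building_blocks = {
          "R1": ["F", "Cl", "OC", "N(C)C"],        # illustrative fragment strings
          "R2": ["C#N", "C=O", "S"],
      }

      def enumerate_library(scaffold, blocks):
          """Yield every combination of fragments grafted onto the scaffold."""
          sites = list(blocks)
          for combo in product(*(blocks[s] for s in sites)):
              entry = scaffold
              for site, frag in zip(sites, combo):
                  entry = entry.replace(f"[{site}]", frag)
              yield entry

      library = list(enumerate_library(scaffold, building_blocks))  # 4 x 3 = 12 entries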

  6. On the Value of Estimating Human Arm Stiffness during Virtual Teleoperation with Robotic Manipulators

    PubMed Central

    Buzzi, Jacopo; Ferrigno, Giancarlo; Jansma, Joost M.; De Momi, Elena

    2017-01-01

    Teleoperated robotic systems are spreading widely across many different fields, from hazardous-environment exploration to surgery. In teleoperation, users directly manipulate a master device to achieve task execution at the slave robot side; this interaction is fundamental to guarantee both system stability and task execution performance. In this work, we propose a non-disruptive method to study the arm endpoint stiffness. We evaluate how users exploit the kinetic redundancy of the arm to achieve stability and precision during the execution of different tasks with different master devices. Four users were asked to perform two planar trajectories following virtual tasks using both a serial and a parallel link master device. Users' arm kinematics and muscular activation were acquired and combined with a user-specific musculoskeletal model to estimate the joint stiffness. Using the arm kinematic Jacobian, the arm endpoint stiffness was derived. The proposed non-disruptive method is capable of estimating the arm endpoint stiffness during the execution of virtual teleoperated tasks. The obtained results are in accordance with the existing literature in human motor control and show, throughout the tested trajectory, a modulation of the arm endpoint stiffness that is affected by task characteristics and hand speed and acceleration. PMID:29018319
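
    The Jacobian-based mapping mentioned above follows a standard relation: given a joint-space stiffness matrix K_q and the kinematic Jacobian J, the endpoint stiffness is K_x = (J K_q^{-1} J^T)^{-1}, neglecting the term involving the Jacobian derivative. A minimal numerical sketch, with placeholder values rather than the study's data:

      import numpy as np

      def endpoint_stiffness(J, Kq):
          """Map a joint-space stiffness matrix Kq to Cartesian endpoint stiffness."""
          compliance = J @ np.linalg.inv(Kq) @ J.T   # endpoint compliance C_x
          return np.linalg.inv(compliance)           # K_x = C_x^{-1}

      J = np.array([[-0.30, -0.18],                  # hypothetical planar-arm Jacobian (m/rad)
                    [ 0.25,  0.10]])
      Kq = np.diag([12.0, 6.0])                      # hypothetical joint stiffnesses (Nm/rad)
      Kx = endpoint_stiffness(J, Kq)                 # endpoint stiffness (N/m)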

  7. Global Software Development with Cloud Platforms

    NASA Astrophysics Data System (ADS)

    Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya

    Offshore and outsourced distributed software development models and processes are facing challenges, previously unknown, with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design and our first implementation results for three cloud forms - a compute cloud, a storage cloud and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, the "storage cloud" offers storage (block- or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD, the lessons learned with our prototypes and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means to supporting an ecosystem of clients, developers and other key stakeholders.

  8. A 3-RSR Haptic Wearable Device for Rendering Fingertip Contact Forces.

    PubMed

    Leonardis, Daniele; Solazzi, Massimiliano; Bortone, Ilaria; Frisoli, Antonio

    2017-01-01

    A novel wearable haptic device for modulating contact forces at the fingertip is presented. Rendering of forces by skin deformation in three degrees of freedom (DoF), with contact/no-contact capabilities, was implemented through rigid parallel kinematics. The novel asymmetrical three revolute-spherical-revolute (3-RSR) configuration allowed compact dimensions with minimum encumbrance of the hand workspace. The device was designed to render constant to low-frequency deformation of the fingerpad in three DoF, combining light weight with relatively high output forces. A differential method for solving the non-trivial inverse kinematics is proposed and implemented in real time for controlling the device. The first experimental activity evaluated discrimination of different fingerpad stretch directions in a group of five subjects. The second experiment, enrolling 19 subjects, evaluated cutaneous feedback provided in a virtual pick-and-place manipulation task. Stiffness of the fingerpad plus device was measured and used to calibrate the physics of the virtual environment. The third experiment with 10 subjects evaluated interaction forces in a virtual lift-and-hold task. Although with different performance in the two manipulation experiments, overall results show that participants better controlled interaction forces when the cutaneous feedback was active, with significant differences between the visual and visuo-haptic experimental conditions.
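
    The paper's differential inverse-kinematics solution is specific to the 3-RSR geometry; as a generic stand-in, a damped least-squares iteration of the kind commonly used for such problems can be sketched as follows (the forward-kinematics function is a hypothetical placeholder):

      import numpy as np

      def numerical_jacobian(fk, q, eps=1e-6):
          """Finite-difference Jacobian of the forward-kinematics map fk."""
          f0 = fk(q)
          J = np.zeros((f0.size, q.size))
          for i in range(q.size):
              dq = np.zeros_like(q); dq[i] = eps
              J[:, i] = (fk(q + dq) - f0) / eps
          return J

      def differential_ik(fk, q, x_target, damping=1e-3, iters=50):
          """Iterate joint angles q toward the target endpoint pose x_target."""
          for _ in range(iters):
              err = x_target - fk(q)
              J = numerical_jacobian(fk, q)
              # Damped pseudo-inverse step keeps the update stable near singularities.
              JtJ = J.T @ J + damping * np.eye(q.size)
              q = q + np.linalg.solve(JtJ, J.T @ err)
          return q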

  9. Estimation of CO2 reduction by parallel hard-type power hybridization for gasoline and diesel vehicles.

    PubMed

    Oh, Yunjung; Park, Junhong; Lee, Jong Tae; Seo, Jigu; Park, Sungwook

    2017-10-01

    The purpose of this study is to investigate possible improvements in ICEVs by implementing fuzzy logic-based parallel hard-type power hybrid systems. Two types of conventional ICEVs (gasoline and diesel) and two types of HEVs (gasoline-electric, diesel-electric) were generated using vehicle and powertrain simulation tools and a Matlab-Simulink application programming interface. For the gasoline and gasoline-electric HEV vehicles, the prediction accuracy for four types of LDV models was validated by conducting comparative analysis with the chassis dynamometer and OBD test data. The predicted results show strong correlation with the test data. The operating points of internal combustion engines and electric motors are well controlled in the high efficiency region and battery SOC was well controlled within ±1.6%. However, for diesel vehicles, we generated a virtual diesel-electric HEV because no vehicle was available with engine and vehicle specifications similar to those of the ICE vehicle. Using a fuzzy logic-based parallel hybrid system in conventional ICEVs demonstrated that HEVs showed superior performance in terms of fuel consumption and CO2 emission in most driving modes. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Rigid-flexible coupling dynamic modeling and investigation of a redundantly actuated parallel manipulator with multiple actuation modes

    NASA Astrophysics Data System (ADS)

    Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying

    2017-09-01

    A systematic dynamic modeling methodology is presented to develop the rigid-flexible coupling dynamic model (RFDM) of an emerging flexible parallel manipulator with multiple actuation modes. By virtue of the assumed mode method, the general dynamic model of an arbitrary flexible body with any number of lumped parameters is derived in an explicit closed form, which possesses the modular characteristic. Then the complete dynamic model of the system is formulated based on the flexible multi-body dynamics (FMD) theory and the augmented Lagrangian multipliers method. An approach combining the Udwadia-Kalaba formulation with the hybrid TR-BDF2 numerical algorithm is proposed to address the nonlinear RFDM. Two simulation cases are performed to investigate the dynamic performance of the manipulator with different actuation modes. The results indicate that the redundant actuation modes can effectively attenuate vibration and guarantee higher dynamic performance compared to the traditional non-redundant actuation modes. Finally, a virtual prototype model is developed to demonstrate the validity of the presented RFDM. The systematic methodology proposed in this study can be conveniently extended for the dynamic modeling and controller design of other planar flexible parallel manipulators, especially the emerging ones with multiple actuation modes.
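
    The hybrid TR-BDF2 integrator named above is a standard composite scheme: a trapezoidal half-step followed by a BDF2 step. A minimal scalar-ODE sketch of one step, using fixed-point iteration for the implicit stages (a production code would use Newton's method):

      import math

      def tr_bdf2_step(f, t, y, h, gamma=2.0 - math.sqrt(2.0), iters=50):
          """One TR-BDF2 step for y' = f(t, y), standard gamma = 2 - sqrt(2)."""
          tg = t + gamma * h
          # Stage 1, trapezoidal rule: y_g = y + (gamma*h/2) * (f(t, y) + f(tg, y_g))
          yg = y
          for _ in range(iters):
              yg = y + 0.5 * gamma * h * (f(t, y) + f(tg, yg))
          # Stage 2, BDF2 with the standard coefficients.
          c1 = 1.0 / (gamma * (2.0 - gamma))
          c2 = (1.0 - gamma) ** 2 / (gamma * (2.0 - gamma))
          c3 = (1.0 - gamma) / (2.0 - gamma)
          y1 = yg
          for _ in range(iters):
              y1 = c1 * yg - c2 * y + c3 * h * f(t + h, y1)
          return y1

      # Example: stiff decay y' = -50*y, exact solution exp(-50*t).
      y, t, h = 1.0, 0.0, 0.01
      for _ in range(100):
          y = tr_bdf2_step(lambda t_, y_: -50.0 * y_, t, y, h)
          t += h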

  11. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC Clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  12. Proceedings: Sisal `93

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J.T.

    1993-10-01

    This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; Compiling technique based on dataflow analysis for the functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  13. Data Parallel Bin-Based Indexing for Answering Queries on Multi-Core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosink, Luke; Wu, Kesheng; Bethel, E. Wes

    2009-06-02

    The multi-core trend in CPUs and general purpose graphics processing units (GPUs) offers new opportunities for the database community. The increase of cores at exponential rates is likely to affect virtually every server and client in the coming decade, and presents database management systems with a huge, compelling disruption that will radically change how processing is done. This paper presents a new parallel indexing data structure for answering queries that takes full advantage of the increasing thread-level parallelism emerging in multi-core architectures. In our approach, our Data Parallel Bin-based Index Strategy (DP-BIS) first bins the base data, and then partitions and stores the values in each bin as a separate, bin-based data cluster. In answering a query, the procedures for examining the bin numbers and the bin-based data clusters offer the maximum possible level of concurrency; each record is evaluated by a single thread and all threads are processed simultaneously in parallel. We implement and demonstrate the effectiveness of DP-BIS on two multi-core architectures: a multi-core CPU and a GPU. The concurrency afforded by DP-BIS allows us to fully utilize the thread-level parallelism provided by each architecture--for example, our GPU-based DP-BIS implementation simultaneously evaluates over 12,000 records with an equivalent number of concurrently executing threads. In comparing DP-BIS's performance across these architectures, we show that the GPU-based DP-BIS implementation requires significantly less computation time to answer a query than the CPU-based implementation. We also demonstrate in our analysis that DP-BIS provides better overall performance than the commonly utilized CPU and GPU-based projection index. Finally, due to data encoding, we show that DP-BIS accesses significantly smaller amounts of data than index strategies that operate solely on a column's base data; this smaller data footprint is critical for parallel processors that possess limited memory resources (e.g., GPUs).
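
    The core logic of bin-based indexing can be sketched serially (DP-BIS evaluates records with one thread each on the CPU/GPU; this Python fragment shows only the idea of full-bin hits plus boundary-bin candidate checks):

      import numpy as np

      def build_bins(data, n_bins):
          """Equal-width binning: return bin edges and each record's bin id."""
          edges = np.linspace(data.min(), data.max(), n_bins + 1)
          bin_ids = np.clip(np.digitize(data, edges) - 1, 0, n_bins - 1)
          return edges, bin_ids

      def range_query(data, edges, bin_ids, lo, hi):
          """Boolean mask of records with lo <= value < hi."""
          inner = (edges[bin_ids] >= lo) & (edges[bin_ids + 1] <= hi)  # bins fully inside
          boundary = ~inner & (edges[bin_ids + 1] > lo) & (edges[bin_ids] < hi)
          hits = inner.copy()
          # Boundary bins require a "candidate check" against the base data.
          hits[boundary] = (data[boundary] >= lo) & (data[boundary] < hi)
          return hits

      data = np.random.rand(100_000)
      edges, bin_ids = build_bins(data, n_bins=64)
      mask = range_query(data, edges, bin_ids, 0.25, 0.75)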

  14. Comparison of two paradigms for distributed shared memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.

    1990-08-01

    The paper compares two paradigms for Distributed Shared Memory on loosely coupled computing systems: the shared data-object model as used in Orca, a programming language specially designed for loosely coupled computing systems, and the Shared Virtual Memory model. For both paradigms the authors have implemented two systems, one using only point-to-point messages, the other using broadcasting as well. They briefly describe these two paradigms and their implementations. Then they compare their performance on four applications: the traveling salesman problem, alpha-beta search, matrix multiplication and the all pairs shortest paths problem. The measurements show that both paradigms can be used efficiently for programming large-grain parallel applications. Significant speedups were obtained on all applications. The unstructured Shared Virtual Memory paradigm achieves the best absolute performance, although this is largely due to the preliminary nature of the Orca compiler used. The structured shared data-object model achieves the highest speedups and is much easier to program and to debug.

  15. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
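
    A toy illustration of the queueing policy described (the actual allocation is governed by the virtual machine monitor and configurable rules; the 3:1 weighting below is invented for the example):

      from collections import deque

      def merge_queues(interactive, batch, weight=3):
          """Drain two client queues into one metadata queue, favoring interactive."""
          metadata_queue = deque()
          while interactive or batch:
              for _ in range(weight):          # serve up to `weight` interactive requests
                  if interactive:
                      metadata_queue.append(interactive.popleft())
              if batch:                        # then one batch request
                  metadata_queue.append(batch.popleft())
          return metadata_queue

      mq = merge_queues(deque(["i1", "i2", "i3", "i4"]), deque(["b1", "b2"]))
      # -> i1, i2, i3, b1, i4, b2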

  16. Effect of quetiapine vs. placebo on response to two virtual public speaking exposures in individuals with social phobia.

    PubMed

    Donahue, Christopher B; Kushner, Matt G; Thuras, Paul D; Murphy, Tom G; Van Demark, Joani B; Adson, David E

    2009-04-01

    Clinical practice and open-label studies suggest that quetiapine (an atypical anti-psychotic) might improve symptoms for individuals with social anxiety disorder (SAD). The purpose of this study was to provide a rigorous test of the acute impact of a single dose of quetiapine (25 mg) on SAD symptoms. Individuals with SAD (N=20) were exposed to a 4-min virtual reality (VR) public speaking challenge after having received quetiapine or placebo (double-blind) 1 h earlier. A parallel VR challenge occurred 1 week later using a counter-balanced cross-over (within subject) design for the medication-placebo order between the two sessions. There was no significant drug effect for quetiapine on the primary outcome measures. However, quetiapine was associated with significantly elevated heart rate and sleepiness compared with placebo. Study findings suggest that a single 25 mg dose of quetiapine is not effective in alleviating SAD symptoms in individuals with fears of public speaking.

  17. Gravity and perceptual stability during translational head movement on earth and in microgravity.

    PubMed

    Jaekl, P; Zikovitz, D C; Jenkin, M R; Jenkin, H L; Zacher, J E; Harris, L R

    2005-01-01

    We measured the amount of visual movement judged consistent with translational head movement under normal and microgravity conditions. Subjects wore a virtual reality helmet in which the ratio of the movement of the world to the movement of the head (visual gain) was variable. Using the method of adjustment under normal gravity 10 subjects adjusted the visual gain until the visual world appeared stable during head movements that were either parallel or orthogonal to gravity. Using the method of constant stimuli under normal gravity, seven subjects moved their heads and judged whether the virtual world appeared to move "with" or "against" their movement for several visual gains. One subject repeated the constant stimuli judgements in microgravity during parabolic flight. The accuracy of judgements appeared unaffected by the direction or absence of gravity. Only the variability appeared affected by the absence of gravity. These results are discussed in relation to discomfort during head movements in microgravity. © 2005 Elsevier Ltd. All rights reserved.

  18. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotron™ units by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™, tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence into the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object oriented C++ program code.

  19. The Subcellular Location of Ovalbumin in Plasmodium berghei Blood Stages Influences the Magnitude of T-Cell Responses

    PubMed Central

    Lin, Jing-Wen; Shaw, Tovah N.; Annoura, Takeshi; Fougère, Aurélie; Bouchier, Pascale; Chevalley-Maurel, Séverine; Kroeze, Hans; Franke-Fayard, Blandine; Janse, Chris J.; Couper, Kevin N.

    2014-01-01

    Model antigens are frequently introduced into pathogens to study determinants that influence T-cell responses to infections. To address whether an antigen's subcellular location influences the nature and magnitude of antigen-specific T-cell responses, we generated Plasmodium berghei parasites expressing the model antigen ovalbumin (OVA) either in the parasite cytoplasm or on the parasitophorous vacuole membrane (PVM). For cytosolic expression, OVA alone or conjugated to mCherry was expressed from a strong constitutive promoter (OVAhsp70 or OVA::mCherryhsp70); for PVM expression, OVA was fused to HEP17/EXP1 (OVA::Hep17hep17). Unexpectedly, OVA expression in OVAhsp70 parasites was very low, but when OVA was fused to mCherry (OVA::mCherryhsp70), it was highly expressed. OVA expression in OVA::Hep17hep17 parasites was strong but significantly less than that in OVA::mCherryhsp70 parasites. These transgenic parasites were used to examine the effects of antigen subcellular location and expression level on the development of T-cell responses during blood-stage infections. While all OVA-expressing parasites induced activation and proliferation of OVA-specific CD8+ T cells (OT-I) and CD4+ T cells (OT-II), the level of activation varied: OVA::Hep17hep17 parasites induced significantly stronger splenic and intracerebral OT-I and OT-II responses than those of OVA::mCherryhsp70 parasites, but OVA::mCherryhsp70 parasites promoted stronger OT-I and OT-II responses than those of OVAhsp70 parasites. Despite lower OVA expression levels, OVA::Hep17hep17 parasites induced stronger T-cell responses than those of OVA::mCherryhsp70 parasites. These results indicate that unconjugated cytosolic OVA is not stably expressed in Plasmodium parasites and, importantly, that its cellular location and expression level influence both the induction and magnitude of parasite-specific T-cell responses. These parasites represent useful tools for studying the development and function of antigen-specific T-cell responses during malaria infection. PMID:25156724

  20. Quantifying air distribution, ventilation effectiveness and airborne pollutant transport in an aircraft cabin mockup

    NASA Astrophysics Data System (ADS)

    Wang, Aijun

    The health, safety and comfort of passengers during flight inspired this research into cabin air quality, which is closely related to its airflow distribution, ventilation effectiveness and airborne pollutant transport. The experimental facility is a full-scale aircraft cabin mockup. A volumetric particle tracking velocimetry (VPTV) technique was enhanced by incorporating a self-developed streak recognition algorithm. Two stable recirculation regions, the reverse flows above the seats and the main air jets from the air supply inlets formed the complicated airflow patterns inside the cabin mockup. The primary air flow was parallel to the passenger rows. The small velocity component in the direction of the cabin depth caused less net air exchange between the passenger rows than that parallel to the passenger rows. Different total air supply rates changed the developing behaviors of the main air jets, leading to different local air distribution patterns. Two indices, local mean age of air and ventilation effectiveness factor (VEF), were measured at five levels of air supply rate and two levels of heating load. Local mean age of air decreased linearly with an increase in the air supply rate, while the VEF remained consistent when the air supply rate varied. The thermal buoyancy force from the thermal plume generated the upside plume flow, opposite to the main jet flow above the boundary seats, and thus lowered the local net air exchange. The airborne transport dynamics depends on the distance between the source and the receptors, the relative location of the pollutant source, and the air supply rate. Exposure risk was significantly reduced with increased distance between source and receptors. Another possible way to decrease the exposure risk was to position the release source close to the exhaust outlets. Increasing the air supply rate could be an effective solution under some emergency situations. The large volume of data regarding the three-dimensional air velocities was visualized in the CAVE virtual environment. ShadowLight, a virtual reality application, was used to import and navigate the velocity vectors through the virtual airspace. A real world demonstration and an active interaction with the three-dimensional air velocity data have been established.

  1. Design and analysis of a global sub-mesoscale and tidal dynamics admitting virtual ocean.

    NASA Astrophysics Data System (ADS)

    Menemenlis, D.; Hill, C. N.

    2016-02-01

    We will describe the techniques used to realize a global kilometer-scale ocean model configuration that includes representation of sea-ice and tidal excitation, and spans scales from planetary gyres to internal tides. A simulation using this model configuration provides a virtual ocean that admits some sub-mesoscale dynamics and tidal energetics not normally represented in global calculations. This extends simulated ocean behavior beyond broadly quasi-geostrophic flows and provides a preliminary example of a next generation computational approach to explicitly probing the interactions between instabilities that are usually parameterized and dominant energetic scales in the ocean. From previous process studies we have ascertained that this can lead to a qualitative improvement in the realism of many significant processes including geostrophic eddy dynamics, shelf-break exchange and topographic mixing. Computationally we exploit high degrees of parallelism in both numerical evaluation and in recording model state to persistent disk storage. Together this allows us to compute and record a full three-dimensional model trajectory at hourly frequency for a time period of 5 months with less than 9 million core hours of parallel computer time, using the present generation NASA Ames Research Center facilities. We have used this capability to create a 5 month trajectory archive, sampled at high spatial and temporal frequency for an ocean configuration that is initialized from a realistic data-assimilated state and driven with reanalysis surface forcing from ECMWF. The resulting database of model state provides a novel virtual laboratory for exploring coupling across scales in the ocean, and for testing ideas on the relationship between small scale fluxes and large scale state. The computation is complemented by counterpart computations that are coarsened two and four times respectively. In this presentation we will review the computational and numerical technologies employed and show how the high spatio-temporal frequency archive of model state can provide a new and promising tool for researching richer ocean dynamics at scale. We will also outline how computations of this nature could be combined with next generation computer hardware plans to help inform important climate process questions.

  2. Interactive Parallel Data Analysis within Data-Centric Cluster Facilities using the IPython Notebook

    NASA Astrophysics Data System (ADS)

    Pascoe, S.; Lansdowne, J.; Iwi, A.; Stephens, A.; Kershaw, P.

    2012-12-01

    The data deluge is making traditional analysis workflows for many researchers obsolete. Support for parallelism within popular tools such as MATLAB, IDL and NCO is not well developed and rarely used. However parallelism is necessary for processing modern data volumes on a timescale conducive to curiosity-driven analysis. Furthermore, for peta-scale datasets such as the CMIP5 archive, it is no longer practical to bring an entire dataset to a researcher's workstation for analysis, or even to their institutional cluster. Therefore, there is an increasing need to develop new analysis platforms which both enable processing at the point of data storage and which provide parallelism. Such an environment should, where possible, maintain the convenience and familiarity of our current analysis environments to encourage curiosity-driven research. We describe how we are combining the interactive python shell (IPython) with our JASMIN data-cluster infrastructure. IPython has been specifically designed to bridge the gap between the HPC-style parallel workflows and the opportunistic curiosity-driven analysis usually carried out using domain specific languages and scriptable tools. IPython offers a web-based interactive environment, the IPython notebook, and a cluster engine for parallelism, all underpinned by the well-respected Python/SciPy scientific programming stack. JASMIN is designed to support the data analysis requirements of the UK and European climate and earth system modeling community. JASMIN, with its sister facility CEMS focused on the earth observation community, has 4.5 PB of fast parallel disk storage alongside over 370 computing cores that provide local computation. Through the IPython interface to JASMIN, users can make efficient use of JASMIN's multi-core virtual machines to perform interactive analysis on all cores simultaneously or can configure IPython clusters across multiple VMs. Larger-scale clusters can be provisioned through JASMIN's batch scheduling system. Outputs can be summarised and visualised using the full power of Python's many scientific tools, including SciPy, Matplotlib, Pandas and CDAT. This rich user experience is delivered through the user's web browser; maintaining the interactive feel of a workstation-based environment with the parallel power of a remote data-centric processing facility.
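
    A minimal sketch of the cluster-engine workflow described above, using the public API of the IPython parallel machinery (packaged today as ipyparallel; it assumes engines have already been started, e.g. with `ipcluster start -n 8`, and the chunk filenames are hypothetical):

      import ipyparallel as ipp

      rc = ipp.Client()                    # connect to the running controller
      dview = rc[:]                        # a direct view over all engines

      def column_mean(fname):
          """Each engine loads and reduces one chunk of the dataset independently."""
          import numpy as np
          return np.load(fname).mean(axis=0)

      chunk_files = [f"chunk_{i}.npy" for i in range(len(rc))]  # hypothetical chunks
      partial_means = dview.map_sync(column_mean, chunk_files)  # runs in parallel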

  3. Parallel transmission RF pulse design with strict temperature constraints.

    PubMed

    Deniz, Cem M; Carluccio, Giuseppe; Collins, Christopher

    2017-05-01

    RF safety in parallel transmission (pTx) is generally ensured by imposing specific absorption rate (SAR) limits during pTx RF pulse design. There is increasing interest in using temperature to ensure safety in MRI. In this work, we present a local temperature correlation matrix formalism and apply it to impose strict constraints on maximum absolute temperature in pTx RF pulse design for head and hip regions. Electromagnetic field simulations were performed on the head and hip of virtual body models. Temperature correlation matrices were calculated for four different exposure durations ranging between 6 and 24 min using simulated fields and body-specific constants. Parallel transmission RF pulses were designed using either SAR or temperature constraints, and compared with each other and unconstrained RF pulse design in terms of excitation fidelity and safety. The use of temperature correlation matrices resulted in better excitation fidelity compared with the use of SAR in parallel transmission RF pulse design (for the 6 min exposure period, 8.8% versus 21.0% for the head and 28.0% versus 32.2% for the hip region). As RF exposure duration increases (from 6 min to 24 min), the benefit of using temperature correlation matrices on RF pulse design diminishes. However, the safety of the subject is always guaranteed (the maximum temperature was equal to 39°C). This trend was observed in both head and hip regions, where the perfusion rates are very different. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Studying Geology of Central Texas through Web-Based Virtual Field Trips

    NASA Astrophysics Data System (ADS)

    Chan, C.; Khan, S. D.; Wellner, J. S.

    2007-12-01

    Each year over 2500 students, mainly non-science majors, take introductory geology classes at the University of Houston. Optional field trips to Central Texas for these classes provide a unique learning opportunity for students to experience geologic concepts in a real world context. The field trips visit Enchanted Rock, Inks Lake, Bee Cave Road, Lion Mountain, and Slaughter Gap. Unfortunately, only around 10% of our students participate in these field trips. We are developing a web-based virtual field trip for Central Texas to provide an additional effective learning experience for students in these classes. The module for Enchanted Rock is complete and consists of linked geological maps, satellite imagery, digital elevation models, 3-D photography, digital video, and 3-D virtual reality visualizations. The ten virtual stops focus on different geologic processes and are accompanied by questions and answers. To test the efficacy of the virtual field trip, we developed a quiz to measure student learning and a survey to evaluate the website. The quiz consists of 10 questions paralleling each stop and information on student attendance on the Central Texas field trip and/or the virtual field trip. From the survey, the average time spent on the website was 26 minutes, and overall the ratings of the virtual field trip were positive. Most noticeably students responded that the information on the website was relevant to their class and that the pictures, figures, and animations were essential to the website. Although high correlation coefficients between responses were expected for some questions (i.e., 0.89 for "The content or text of the website was clear" and "The information on the website was easy to read"), some correlations were less expected: 0.77 for "The number of test questions was appropriate" and "The information on the website was easy to read," and 0.70 for "The test questions reinforced the material presented on the website" and "The information on the website is relevant to my class." These virtual field trips provide an alternative for students who cannot attend the actual field trips. They also allow transfer students to experience these sites before attending upper level field trips, which often return to study these sites in more detail. These modules provide a valuable supplementary experience for all students, as they emphasize skills for which we are presently unable to provide sufficient practice in lecture, fieldtrips, or laboratory. Public access to the field trips is available at: http://geoinfo.geosc.uh.edu/VR/

  5. The computer-aided parallel external fixator for complex lower limb deformity correction.

    PubMed

    Wei, Mengting; Chen, Jianwen; Guo, Yue; Sun, Hao

    2017-12-01

    Since parameters of the parallel external fixator are difficult to measure and calculate in real applications, this study developed computer software that can help the doctor measure parameters using digital technology and generate an electronic prescription for deformity correction. According to Paley's deformity measurement method, we provided digital measurement techniques. In addition, we proposed a deformity correction algorithm to calculate the elongations of the six struts and developed electronic prescription software. At the same time, a three-dimensional simulation of the parallel external fixator and deformed fragment was made using Virtual Reality Modeling Language (VRML) technology. From 2013 to 2015, fifteen patients with complex lower limb deformity were treated with parallel external fixators and the self-developed computer software. All of the cases had unilateral limb deformity. The deformities were caused by old osteomyelitis in nine cases and traumatic sequelae in six cases. A doctor measured the related angulation, displacement and rotation on postoperative radiographs using the digital measurement techniques. Measurement data were input into the electronic prescription software to calculate the daily adjustment elongations of the struts. Daily strut adjustments were conducted according to the data calculated. The frame was removed when expected results were achieved. Patients lived independently during the adjustment. The mean follow-up was 15 months (range 10-22 months). The duration of frame fixation from the time of application to the time of removal averaged 8.4 months (range 2.5-13.1 months). All patients were satisfied with the corrected limb alignment. No cases of wound infections or complications occurred. Using the computer-aided parallel external fixator for the correction of lower limb deformities can achieve satisfactory outcomes. The correction process can be simplified and is precise and digitized, which will greatly improve the treatment in a clinical application.
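
    The elongation calculation behind such an electronic prescription reduces, for a six-strut hexapod-style fixator, to rigid-body inverse kinematics: apply the prescribed correction transform to the moving ring's attachment points and take distances to the base attachments. A sketch with hypothetical ring geometry and correction values (not the authors' software):

      import numpy as np

      def strut_lengths(base_pts, ring_pts, R, t):
          """base_pts, ring_pts: (6,3) attachment points; R: 3x3 rotation; t: translation."""
          moved = ring_pts @ R.T + t            # rigid-body motion of the moving ring
          return np.linalg.norm(moved - base_pts, axis=1)

      ang = np.radians([0, 60, 120, 180, 240, 300])
      base_pts = np.c_[80*np.cos(ang), 80*np.sin(ang), np.zeros(6)]            # mm
      ring_pts = np.c_[70*np.cos(ang+0.3), 70*np.sin(ang+0.3), np.full(6, 150.0)]

      theta = np.radians(2.0)                   # hypothetical residual angulation
      R = np.array([[1, 0, 0],
                    [0, np.cos(theta), -np.sin(theta)],
                    [0, np.sin(theta),  np.cos(theta)]])
      # Required strut elongations = corrected lengths minus current lengths.
      delta = (strut_lengths(base_pts, ring_pts, R, np.array([0, 1.0, 0]))
               - strut_lengths(base_pts, ring_pts, np.eye(3), np.zeros(3)))    # mm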

  6. Laser Ray Tracing in a Parallel Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Hydrocode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masters, N D; Kaiser, T B; Anderson, R W

    2009-09-28

    ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray tracing in ALE-AMR: the equations of laser ray tracing, and our approach to efficient traversal of the adaptive mesh hierarchy, in which we propagate computational rays through a virtual composite mesh consisting of the finest-resolution representation of the modeled space. We also anticipate simulations that will be compared to experiments for code validation.

  7. A Massively Parallel Tensor Contraction Framework for Coupled-Cluster Computations

    DTIC Science & Technology

    2014-08-02

    The CCSD model [41], where T = T1 + T2 (i.e., n = 2 in Equation 2), is one of the most widely used coupled-cluster methods as it provides a good... derived from response theory. Extending this to CCSDT [30, 35], where T = T1 + T2 + T3 (n = 3), gives an even more accurate method (often capable of... CCSD and CCSDT have leading-order costs of O(n_o^2 n_v^4) and O(n_o^3 n_v^5), where n_o and n_v are the number of occupied and virtual orbitals, respectively

  8. New species from the Galoka and Kalabenono massifs: two unknown and severely threatened mountainous areas in NW Madagascar

    PubMed Central

    Callmander, Martin W.; Rakotovao, Charles; Razafitsalama, Jeremi; Phillipson, Peter B.; Buerki, Sven; Hong-Wa, Cynthia; Rakotoarivelo, Nivo; Andriambololonera, Sylvie; Koopman, Margaret M.; Johnson, David M.; Deroin, Thierry; Ravoahangy, Andriamandranto; Solo, Serge; Labat, Jean-Noël; Lowry, Porter P.

    2011-01-01

    The Galoka mountain chain, comprising principally the Galoka and Kalabenono massifs, situated at the northern edge of the Sambirano Region in NW Madagascar is an area that was virtually unknown botanically. It was visited three times between 2005 and 2007 as part of a floristic inventory. Both massifs contain the last remaining primary forests in the Galoka chain, which extends parallel to the coastline from South of Ambilobe to North of Ambanja. Several new species have been discovered amongst the collections, eight of which are described here. PMID:21857767

  9. From Sky to Earth: Data Science Methodology Transfer

    NASA Astrophysics Data System (ADS)

    Mahabal, Ashish A.; Crichton, Daniel; Djorgovski, S. G.; Law, Emily; Hughes, John S.

    2017-06-01

    We describe here the parallels in astronomy and earth science datasets, their analyses, and the opportunities for methodology transfer from astroinformatics to geoinformatics. Using the example of hydrology, we emphasize how metadata and ontologies are crucial in such an undertaking. Using the infrastructure being designed for EarthCube - the Virtual Observatory for the earth sciences - we discuss essential steps for better transfer of tools and techniques in the future, e.g., domain adaptation. Finally, we point out that it is never a one-way process and there is enough for astroinformatics to learn from geoinformatics as well.

  10. Asynchronous broadcast for ordered delivery between compute nodes in a parallel computing system where packet header space is limited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Sameer

    Disclosed is a mechanism on receiving processors in a parallel computing system for providing order to data packets received from a broadcast call and to distinguish data packets received at nodes from several incoming asynchronous broadcast messages where header space is limited. In the present invention, processors at lower leaves of a tree do not need to obtain a broadcast message by directly accessing the data in a root processor's buffer. Instead, each subsequent intermediate node's rank id information is squeezed into the software header of packet headers. In turn, the entire broadcast message is not transferred from the root processor to each processor in a communicator but instead is replicated on several intermediate nodes, which then replicate the message to nodes in lower leaves. Hence, the intermediate compute nodes become "virtual root compute nodes" for the purpose of replicating the broadcast message to lower levels of a tree.
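
    The fan-out described, in which nodes that hold the message become virtual roots for further replication, matches a binomial-tree broadcast. A sketch of the resulting send schedule (rank arithmetic only; the patent's packet-header encoding is not reproduced here):

      def broadcast_schedule(n_ranks, root=0):
          """Return (sender, receiver) pairs of a binomial-tree broadcast, round by round."""
          sends, have = [], {root}
          step = 1
          while step < n_ranks:
              for src in sorted(have):          # every holder forwards this round
                  dst = src + step
                  if dst < n_ranks and dst not in have:
                      sends.append((src, dst))
              have.update(d for _, d in sends)  # receivers become virtual roots
              step *= 2
          return sends

      for src, dst in broadcast_schedule(8):
          print(f"rank {src} -> rank {dst}")    # 0->1; 0->2, 1->3; 0->4, 1->5, ...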

  11. Quartic scaling MP2 for solids: A highly parallelized algorithm in the plane wave basis

    NASA Astrophysics Data System (ADS)

    Schäfer, Tobias; Ramberger, Benjamin; Kresse, Georg

    2017-03-01

    We present a low-complexity algorithm to calculate the correlation energy of periodic systems in second-order Møller-Plesset (MP2) perturbation theory. In contrast to previous approximation-free MP2 codes, our implementation possesses a quartic scaling, O(N^4), with respect to the system size N and offers an almost ideal parallelization efficiency. The general issue that the correlation energy converges slowly with the number of basis functions is eased by an internal basis set extrapolation. The key concept to reduce the scaling is to eliminate all summations over virtual orbitals which can be elegantly achieved in the Laplace transformed MP2 formulation using plane wave basis sets and fast Fourier transforms. Analogously, this approach could allow us to calculate second order screened exchange as well as particle-hole ladder diagrams with a similar low complexity. Hence, the presented method can be considered as a step towards systematically improved correlation energies.
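
    The Laplace-transform trick referred to above rests on the identity 1/D = \int_0^\infty e^{-Dt} dt for D > 0, which replaces the MP2 orbital-energy denominator by a short quadrature and thereby decouples the summations over virtual orbitals. A numerical sketch of the identity (production codes use optimized quadratures rather than the generic Gauss-Laguerre rule used here):

      import numpy as np

      t, w = np.polynomial.laguerre.laggauss(12)  # nodes/weights for \int_0^inf e^{-t} f(t) dt
      D = 0.85                                    # a sample denominator e_a+e_b-e_i-e_j (a.u.)

      # \int_0^inf e^{-D t} dt = \int_0^inf e^{-t} e^{-(D-1) t} dt
      #                       ~= sum_k w_k e^{-(D-1) t_k}
      approx = np.sum(w * np.exp(-(D - 1.0) * t))
      print(approx, 1.0 / D)                      # quadrature vs. exact 1/D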

  12. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, interaction with virtual environments to analysis and visualizations. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside with the simulation. PMID:19430597
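
    In NEURON's Python interface, the "replace the main integration loop" idea can be sketched with the stock stepping calls h.finitialize and h.fadvance; the monitoring hook below is a hypothetical stand-in for the framework's pluggable components, not its actual API:

      from neuron import h

      soma = h.Section(name="soma")
      soma.insert("hh")                       # a minimal model, just for illustration
      stim = h.IClamp(soma(0.5))
      stim.delay, stim.dur, stim.amp = 1.0, 5.0, 0.1

      def monitor(t, v):
          pass  # stand-in for a pluggable analysis/spike-exchange component

      h.finitialize(-65.0)
      while h.t < 20.0:                       # custom main loop instead of h.run()
          h.fadvance()                        # advance one time step
          monitor(h.t, soma(0.5).v)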

  13. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e. various works of art produced by using a computer, are published for hobby and entertainment. It is said that activation of the brain, improvement of visual eyesight, decrease of mental stress, effect of healing, etc. are expected when properly appreciating a kind of CGS as the stereoscopic view. There is a lot of information on internet web sites concerning all aspects of stereogram history, science, social organization, various types of stereograms, and free software for generating CGS. Generally, the CGS is classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each stereogram has advantages and disadvantages when viewing directly the stereogram with two eyes by training with a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI) called hidden image or picture, and the effect of irregular shift of the texture pattern image called wall paper are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and visual quality of the virtual image by means of simultaneous observation in the case of the parallel viewing method.
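
    For the random-dot-stereogram family mentioned above, the classic row-wise generation algorithm from a depth map (the "DMI") is short enough to sketch; this simplified Python version links the pixel pairs that the two eyes must see identically and is illustrative only:

      import numpy as np

      def make_sirds(depth, eye_sep=80, depth_scale=0.3):
          """depth: 2-D array in [0,1]; returns a 2-D 0/1 random-dot image."""
          h, w = depth.shape
          img = np.zeros((h, w), dtype=np.uint8)
          for y in range(h):
              same = np.arange(w)               # each pixel initially its own color
              for x in range(w):
                  sep = int(eye_sep * (1.0 - depth_scale * depth[y, x]))
                  left, right = x - sep // 2, x - sep // 2 + sep
                  if 0 <= left and right < w:
                      same[right] = left        # constrain the two eye projections
              for x in range(w):                # resolve constraints left to right
                  img[y, x] = np.random.randint(0, 2) if same[x] == x else img[y, same[x]]
          return img

      depth = np.zeros((200, 300)); depth[60:140, 100:200] = 1.0  # a floating square
      sirds = make_sirds(depth)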

  14. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  15. Topological switching between an alpha-beta parallel protein and a remarkably helical molten globule.

    PubMed

    Nabuurs, Sanne M; Westphal, Adrie H; aan den Toorn, Marije; Lindhoud, Simon; van Mierlo, Carlo P M

    2009-06-17

    Partially folded protein species transiently exist during folding of most proteins. Often these species are molten globules, which may be on- or off-pathway to native protein. Molten globules have a substantial amount of secondary structure but lack virtually all the tertiary side-chain packing characteristic of natively folded proteins. These ensembles of interconverting conformers are prone to aggregation and potentially play a role in numerous devastating pathologies, and thus attract considerable attention. The molten globule that is observed during folding of apoflavodoxin from Azotobacter vinelandii is off-pathway, as it has to unfold before native protein can be formed. Here we report that this species can be trapped under nativelike conditions by substituting amino acid residue F44 by Y44, allowing spectroscopic characterization of its conformation. Whereas native apoflavodoxin contains a parallel beta-sheet surrounded by alpha-helices (i.e., the flavodoxin-like or alpha-beta parallel topology), it is shown that the molten globule has a totally different topology: it is helical and contains no beta-sheet. The presence of this remarkably nonnative species shows that single polypeptide sequences can code for distinct folds that swap upon changing conditions. Topological switching between unrelated protein structures is likely a general phenomenon in the protein structure universe.

  16. Simulating coupled dynamics of a rigid-flexible multibody system and compressible fluid

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Tian, Qiang; Hu, HaiYan

    2018-04-01

    As a continuation of the authors' previous studies, a new parallel computation approach is proposed to simulate the coupled dynamics of a rigid-flexible multibody system and compressible fluid. In this approach, the smoothed particle hydrodynamics (SPH) method is used to model the compressible fluid, and the natural coordinate formulation (NCF) and absolute nodal coordinate formulation (ANCF) are used to model the rigid and flexible bodies, respectively. In order to model the compressible fluid properly and efficiently via the SPH method, three measures are taken as follows. The first is to use the Riemann solver to cope with the fluid compressibility, the second is to define virtual particles of SPH to model the dynamic interaction between the fluid and the multibody system, and the third is to impose the boundary conditions of periodical inflow and outflow to reduce the number of SPH particles involved in the computation process. Afterwards, a parallel computation strategy is proposed based on the graphics processing unit (GPU) to detect the neighboring SPH particles and to solve the dynamic equations of SPH particles in order to improve the computation efficiency. Meanwhile, the generalized-alpha algorithm is used to solve the dynamic equations of the multibody system. Finally, four case studies are given to validate the proposed parallel computation approach.
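
    The neighbor detection that the paper offloads to the GPU is typically a cell-list search; a serial Python sketch of the same logic (bin particles into cells of side h, the smoothing length, and compare only the 27 surrounding cells):

      from collections import defaultdict
      import numpy as np

      def neighbor_pairs(pos, h):
          """All particle pairs closer than h, found via a uniform cell grid."""
          cells = defaultdict(list)
          for i, p in enumerate(pos):
              cells[tuple((p // h).astype(int))].append(i)
          pairs = []
          for (cx, cy, cz), members in cells.items():
              for dx in (-1, 0, 1):
                  for dy in (-1, 0, 1):
                      for dz in (-1, 0, 1):
                          for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                              for i in members:
                                  # i < j avoids double-counting symmetric cell visits
                                  if i < j and np.sum((pos[i] - pos[j])**2) < h*h:
                                      pairs.append((i, j))
          return pairs

      pos = np.random.rand(500, 3)      # toy particle positions
      pairs = neighbor_pairs(pos, h=0.1)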

  17. Effectiveness of Virtual Reality Exercises in STroke Rehabilitation (EVREST): rationale, design, and protocol of a pilot randomized clinical trial assessing the Wii gaming system.

    PubMed

    Saposnik, G; Mamdani, M; Bayley, M; Thorpe, K E; Hall, J; Cohen, L G; Teasell, R

    2010-02-01

    Evidence suggests that increasing intensity of rehabilitation results in better motor recovery. Limited evidence is available on the effectiveness of an interactive virtual reality gaming system for stroke rehabilitation. EVREST was designed to evaluate feasibility, safety and efficacy of using the Nintendo Wii gaming virtual reality (VRWii) technology to improve arm recovery in stroke patients. A pilot randomized study comparing VRWii versus recreational therapy (RT) in patients receiving standard rehabilitation within six months of stroke with a motor deficit of ≥3 on the Chedoke-McMaster Scale (arm). In this study we expect to randomize 20 patients. All participants (age 18-85) will receive customary rehabilitative treatment consisting of a standardized protocol (eight sessions, 60 min each, over a two-week period). The primary feasibility outcome is the total time receiving the intervention. The primary safety outcome is the proportion of patients experiencing intervention-related adverse events during the study period. Efficacy, a secondary outcome measure, will be measured by the Wolf Motor Function Test, Box and Block Test, and Stroke Impact Scale at the four-week follow-up visit. From November 2008 to September 2009, 21 patients were randomized to VRWii or RT. Mean age, 61 (range 41-83) years. Mean time from stroke onset 25 (range 10-56) days. EVREST is the first randomized parallel controlled trial assessing the feasibility, safety, and efficacy of virtual reality using Wii gaming technology in stroke rehabilitation. The results of this study will serve as the basis for a larger multicentre trial. ClinicalTrials.gov registration # NCT692523.

  18. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  19. Proliferation assessment in breast carcinomas using digital image analysis based on virtual Ki67/cytokeratin double staining.

    PubMed

    Røge, Rasmus; Riber-Hansen, Rikke; Nielsen, Søren; Vyberg, Mogens

    2016-07-01

    Manual estimation of Ki67 Proliferation Index (PI) in breast carcinoma classification is labor intensive and prone to intra- and interobserver variation. Standard Digital Image Analysis (DIA) has limitations due to issues with tumor cell identification. Recently, a computer algorithm, DIA based on Virtual Double Staining (VDS), segmenting Ki67-positive and -negative tumor cells using digitally fused parallel cytokeratin (CK) and Ki67-stained slides has been introduced. In this study, we compare VDS with manual stereological counting of Ki67-positive and -negative cells and examine the impact of the physical distance of the parallel slides on the alignment of slides. TMAs, containing 140 cores of consecutively obtained breast carcinomas, were stained for CK and Ki67 using optimized staining protocols. By means of stereological principles, Ki67-positive and -negative cell profiles were counted in sampled areas and used for the estimation of PIs of the whole tissue core. The VDS principle was applied to both the same sampled areas and the whole tissue core. Additionally, five neighboring slides were stained for CK in order to examine the alignment algorithm. Correlation between manual counting and VDS in both sampled areas and whole core was almost perfect (correlation coefficients above 0.97). Bland-Altman plots did not reveal any skewness in any data ranges. There was a good agreement in alignment (>85 %) in neighboring slides, whereas agreement decreased in non-neighboring slides. VDS gave similar results compared with manual counting using stereological principles. Introduction of this method in clinical and research practice may improve accuracy and reproducibility of Ki67 PI.

  20. sLORETA current source density analysis of evoked potentials for spatial updating in a virtual navigation task

    PubMed Central

    Nguyen, Hai M.; Matsumoto, Jumpei; Tran, Anh H.; Ono, Taketoshi; Nishijo, Hisao

    2014-01-01

    Previous studies have reported that multiple brain regions are activated during spatial navigation. However, it is unclear whether these activated brain regions are specifically associated with spatial updating or whether some regions are recruited for parallel cognitive processes. The present study aimed to localize current sources of event related potentials (ERPs) associated with spatial updating specifically. In the control phase of the experiment, electroencephalograms (EEGs) were recorded while subjects sequentially traced 10 blue checkpoints on the streets of a virtual town, which were sequentially connected by a green line, by manipulating a joystick. In the test phase of the experiment, the checkpoints and green line were not indicated. Instead, a tone was presented when the subjects entered the reference points where they were then required to trace the 10 invisible spatial reference points corresponding to the checkpoints. The vertex-positive ERPs with latencies of approximately 340 ms from the moment when the subjects entered the unmarked reference points were significantly larger in the test than in the control phases. Current source density analysis of the ERPs by standardized low-resolution brain electromagnetic tomography (sLORETA) indicated activation of brain regions in the test phase that are associated with place and landmark recognition (entorhinal cortex/hippocampus, parahippocampal and retrosplenial cortices, fusiform, and lingual gyri), detecting self-motion (posterior cingulate and posterior insular cortices), motor planning (superior frontal gyrus, including the medial frontal cortex), and regions that process spatial attention (inferior parietal lobule). The present results provide the first identification of the current sources of ERPs associated with spatial updating, and suggest that multiple systems are active in parallel during spatial updating. PMID:24624067
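    The ERP computation underlying this comparison is, at its core, trial averaging of time-locked EEG epochs; the numpy sketch below shows the test-minus-control difference wave (array shapes and the random data are illustrative only, and the sLORETA source localization step is not reproduced here).

        # Hedged sketch of basic ERP averaging: epochs time-locked to entry into a
        # reference point, averaged over trials per condition.
        import numpy as np

        def erp(epochs: np.ndarray) -> np.ndarray:
            """epochs: (n_trials, n_channels, n_samples) -> average over trials."""
            return epochs.mean(axis=0)

        rng = np.random.default_rng(0)
        test_epochs = rng.normal(size=(40, 32, 500))     # 40 trials, 32 channels, 500 samples
        control_epochs = rng.normal(size=(40, 32, 500))
        difference_wave = erp(test_epochs) - erp(control_epochs)  # test minus control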

  1. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    PubMed

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  2. Numerical modelling of orthogonal cutting: application to woodworking with a bench plane.

    PubMed

    Nairn, John A

    2016-06-06

    A numerical model for orthogonal cutting using the material point method was applied to woodcutting using a bench plane. The cutting process was modelled by accounting for surface energy associated with wood fracture toughness for crack growth parallel to the grain. By using damping to deal with dynamic crack propagation and modelling all contact between wood and the plane, simulations could initiate chip formation and proceed into steady-state chip propagation including chip curling. Once steady-state conditions were achieved, the cutting forces became constant and could be determined as a function of various simulation variables. The modelling details included a cutting tool, the tool's rake and grinding angles, a chip breaker, a base plate and a mouth opening between the base plate and the tool. The wood was modelled as an anisotropic elastic-plastic material. The simulations were verified by comparison to an analytical model and then used to conduct virtual experiments on wood planing. The virtual experiments showed interactions between depth of cut, chip breaker location and mouth opening. Additional simulations investigated the role of tool grinding angle, tool sharpness and friction.

  3. Singularity in the positive Hall coefficient near pre-onset temperatures in high-Tc superconductors

    NASA Astrophysics Data System (ADS)

    Vezzoli, G. C.; Chen, M. F.; Craver, F.; Moon, B. M.; Safari, A.; Burke, T.; Stanley, W.

    1990-10-01

    Hall measurements using continuous, extremely slow cooling and reheating rates, as well as conventional equilibrium point-by-point techniques, reveal a clear anomaly in RH at pre-onset temperatures near Tc in polycrystalline samples of Y1Ba2Cu3O7 and Bi2Sr2Ca2Cu3O10. The anomaly has the appearance of a Dirac-delta-like singularity, paralleling earlier work on La1-xSrxCuO4. Recent single-crystal work on the Bi-containing high-Tc superconductor is in accord with a clear-cut anomaly. The singularity is tentatively interpreted as associated (upon cooling), initially, with the removal of positive holes from the hopping conduction system of the normal state, for example through the increased concentration of bound virtual excitons due to increased exciton and hole lifetimes at low temperature. Subsequently, the formation of Cooper pairs by mediation from these centers (bound holes and/or bound excitons) may ionize the bound virtual excitons, thereby re-introducing holes and electrons into the conduction system at Tc.

  4. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis

    PubMed Central

    Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis. PMID:25271953

  5. Evaluation of an organic light-emitting diode display for precise visual stimulation.

    PubMed

    Ito, Hiroyuki; Ogawa, Masaki; Sunaga, Shoji

    2013-06-11

    A new type of visual display for high-quality presentation of visual stimuli was assessed. The characteristics of an organic light-emitting diode (OLED) display (Sony PVM-2541, 24.5 in.; Sony Corporation, Tokyo, Japan) were measured in detail from the viewpoint of its applicability to visual psychophysics. We found the new display to be superior to other display types in terms of spatial uniformity, color gamut, and contrast ratio. Luminance transitions were sharper on the OLED display than on a liquid crystal display. Therefore, such OLED displays could replace conventional cathode ray tube displays in vision research for high-quality stimulus presentation. The benefits of using OLED displays in vision research were especially apparent in the fields of low-level vision, where precise control and description of the stimulus are needed, e.g., in mesopic or scotopic vision, color vision, and motion perception.

  6. Morphological diversity of nitroguanidine crystals with enhanced mechanical performance and thermodynamic stability

    NASA Astrophysics Data System (ADS)

    Luo, Zhilong; Cui, Yingdan; Dong, Weibing; Xu, Qipeng; Zou, Gaoxing; Kang, Chao; Hou, Baohong; Chen, Song; Gong, Junbo

    2017-12-01

    Nitroguanidine (NQ) is an explosive widely used in both civilian and military applications. However, its weak flowability and mechanical performance limit its application. In this work, the mechanical performance and thermodynamic stability of NQ crystals were improved by controlling crystal morphologies in the crystallization process. Typical NQ crystals with multiple morphologies and a single crystal form were obtained in the presence of additives during cooling crystallization. The morphology-controlled NQ crystals showed higher density, unimodal crystal size distribution and enhanced flowability. The additives showed an inhibitory effect on the nucleation of NQ crystals, as determined by in-situ FBRM and PVM, and the mechanism was analyzed by means of morphological prediction and molecular simulation. Furthermore, the morphology-controlled NQ crystals showed higher thermodynamic stability according to calculations of entropy, enthalpy, Gibbs free energy and apparent activation energy on the basis of DSC results.

  7. Pump and Flow Control Subassembly of Thermal Control Subsystem for Photovoltaic Power Module

    NASA Technical Reports Server (NTRS)

    Motil, Brian; Santen, Mark A.

    1993-01-01

    The pump and flow control subassembly (PFCS) is an orbital replacement unit (ORU) on the Space Station Freedom photovoltaic power module (PVM). The PFCS pumps liquid ammonia at a constant rate of approximately 1170 kg/hr while providing temperature control by flow regulation between the radiator and the bypass loop. Also, housed within the ORU is an accumulator to compensate for fluid volumetric changes as well as the electronics and firmware for monitoring and control of the photovoltaic thermal control system (PVTCS). Major electronic functions include signal conditioning, data interfacing and motor control. This paper will provide a description of each major component within the PFCS along with performance test data. In addition, this paper will discuss the flow control algorithm and describe how the nickel hydrogen batteries and associated power electronics will be thermally controlled through regulation of coolant flow to the radiator.
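    The flight flow-control algorithm itself is described in the paper, not here; purely as a hedged illustration of regulating flow between the radiator and the bypass loop, a generic proportional controller could look like the following (the gain, setpoint, and limits are invented for the example and are not the PFCS algorithm).

        # Generic proportional-control sketch of radiator/bypass flow splitting;
        # all numbers are illustrative placeholders, not flight parameters.
        def radiator_flow_fraction(temp_c: float, setpoint_c: float,
                                   gain: float = 0.05) -> float:
            """Fraction of the ~1170 kg/hr ammonia flow routed to the radiator;
            the remainder goes through the bypass loop."""
            fraction = 0.5 + gain * (temp_c - setpoint_c)  # warmer coolant -> more to radiator
            return min(1.0, max(0.0, fraction))

        print(radiator_flow_fraction(temp_c=6.0, setpoint_c=2.0))  # -> 0.7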

  8. On Borders: From Ancient to Postmodern Times

    NASA Astrophysics Data System (ADS)

    Bellezza, G.

    2013-11-01

    The article deals with the evolution of the concept of borders between human groups, from the initial no man's land zones to the ideal one-dimensional linear border. In ancient times the first borders were natural features, such as mountain ranges or large rivers, until, with the development of geodesy, astronomical borders based on meridians and parallels became a favoured basis; modern states indeed adopted these to fix limits in largely unknown conquered territories. Postmodern thought gave growing importance to cultural borders until, in the most recent times, it has become all but impossible to fix borders in virtual cyberspace.

  9. Large scale GW calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  10. Large Scale GW Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. We applied the newly developed technique to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  11. Virtual System Environments

    NASA Astrophysics Data System (ADS)

    Vallée, Geoffroy; Naughton, Thomas; Ong, Hong; Tikotekar, Anand; Engelmann, Christian; Bland, Wesley; Aderholdt, Ferrol; Scott, Stephen L.

    Distributed and parallel systems are typically managed with “static” settings: the operating system (OS) and the runtime environment (RTE) are specified at a given time and cannot be changed to fit an application’s needs. This means that every time application developers want to use their application on a new execution platform, the application has to be ported to this new environment, which may be expensive in terms of application modifications and developer time. However, the science resides in the applications and not in the OS or the RTE. Therefore, it should be beneficial to adapt the OS and the RTE to the application instead of adapting the applications to the OS and the RTE.

  12. On the generalized VIP time integral methodology for transient thermal problems

    NASA Technical Reports Server (NTRS)

    Mei, Youping; Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The paper describes the development and applicability of a generalized VIrtual-Pulse (VIP) time integral method of computation for thermal problems. With the advent of high-speed computing technology and the importance of parallel computation for efficient use of computing environments, and unlike past approaches for general heat transfer computations, a major motivation for the developments described in this paper is the need for explicit computational procedures with improved accuracy and stability characteristics. As a consequence, a new and effective VIP methodology is described which inherits these improved characteristics. Numerical illustrative examples are provided to demonstrate the developments and validate the results obtained for thermal problems.

  13. Computational method for multi-modal microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2017-02-01

    In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously, based on the transport of intensity equation (TIE). We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
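    Under the common uniform-intensity assumption, the TIE reduces to a Poisson equation for the phase, laplacian(phi) = -(k/I0) dI/dz, which can be inverted with FFTs; the sketch below illustrates that core step only. The wavelength, pixel size and axial intensity derivative are assumed inputs, and this is not the authors' tunable-lens pipeline.

        # Hedged sketch of uniform-intensity TIE phase recovery via an FFT Poisson
        # solve; assumes a square image and illustrative optical parameters.
        import numpy as np

        def tie_phase(dI_dz, I0, wavelength, pixel):
            """dI_dz: axial intensity derivative, e.g. (I(+dz) - I(-dz)) / (2*dz)."""
            k = 2 * np.pi / wavelength
            n = dI_dz.shape[0]
            fx = 2 * np.pi * np.fft.fftfreq(n, d=pixel)
            kx, ky = np.meshgrid(fx, fx, indexing="ij")
            lap = -(kx**2 + ky**2)
            lap[0, 0] = 1.0                      # avoid division by zero at DC
            rhs = -(k / I0) * dI_dz
            phi = np.real(np.fft.ifft2(np.fft.fft2(rhs) / lap))
            return phi - phi.mean()              # phase is defined up to a constant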

  14. Large scale GW calculations

    DOE PAGES

    Govoni, Marco; Galli, Giulia

    2015-01-12

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  15. Sexual affordances, perceptual-motor invariance extraction and intentional nonlinear dynamics: sexually deviant and non-deviant patterns in male subjects.

    PubMed

    Renaud, Patrice; Goyette, Mathieu; Chartier, Sylvain; Zhornitski, Simon; Trottier, Dominique; Rouleau, Joanne-L; Proulx, Jean; Fedoroff, Paul; Bradford, John-P; Dassylva, Benoit; Bouchard, Stephane

    2010-10-01

    Sexual arousal and gaze behavior dynamics are used to characterize deviant sexual interests in male subjects. Pedophile patients and non-deviant subjects are immersed with virtual characters depicting relevant sexual features. Gaze behavior dynamics, as indexed by correlation dimensions (D2), appear to be fractal in nature and significantly different from colored noise (surrogate data tests and recurrence plot analyses were performed). This perceptual-motor fractal dynamics parallels sexual arousal and differs between pedophile patients and non-deviant subjects when critical sexual information is processed. Results are interpreted in terms of sexual affordances, perceptual invariance extraction and intentional nonlinear dynamics.

  16. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    NASA Astrophysics Data System (ADS)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations on a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup levels off.
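    The reported scaling can be summarized with the usual speedup and parallel-efficiency arithmetic; the snippet below uses invented wall-clock numbers chosen only to be consistent with the qualitative behaviour described (roughly linear to 64 cores, then flattening).

        # Speedup and parallel efficiency from wall-clock times; the timings are
        # illustrative placeholders, not the measured AWS EC2 results.
        def speedup(t_base: float, t_n: float) -> float:
            return t_base / t_n

        def efficiency(t_base: float, t_n: float, n_base: int, n: int) -> float:
            return speedup(t_base, t_n) * n_base / n

        t16, t64 = 10.0, 4.4                  # hypothetical hours per model year
        print(speedup(t16, t64))              # ~2.3x from 16 -> 64 cores (>50% time cut)
        print(efficiency(t16, t64, 16, 64))   # ~0.57 -> scaling starting to roll off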

  17. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power and weight constraints. Several technologies are appearing with promising results for high-performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes benchmarking for hardware selection, the software architecture and communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or similar target-detection applications. Results are obtained for payload image-processing algorithms, determining in real time which data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and the power consumed.

  18. The Virtual Short Physical Performance Battery

    PubMed Central

    Wrights, Abbie P.; Haakonssen, Eric H.; Dobrosielski, Meredith A.; Chmelo, Elizabeth A.; Barnard, Ryan T.; Pecorella, Anthony; Ip, Edward H.; Rejeski, W. Jack

    2015-01-01

    Background. Performance-based and self-report instruments of physical function are frequently used and provide complementary information. Identifying older adults with a mismatch between actual and perceived function has utility in clinical settings and in the design of interventions. Using novel, video-animated technology, the objective of this study was to develop a self-report measure that parallels the domains of objective physical function assessed by the Short Physical Performance Battery (SPPB)—the virtual SPPB (vSPPB). Methods. The SPPB, vSPPB, the self-report Pepper Assessment Tool for Disability, the Mobility Assessment Tool-short form, and a 400-m walk test were administered to 110 older adults (mean age = 80.6±5.2 years). One-week test–retest reliability of the vSPPB was examined in 30 participants. Results. The total SPPB (mean [±SD] = 7.7±2.8) and vSPPB (7.7±3.2) scores were virtually identical, yet moderately correlated (r = .601, p < .05). The component scores of the SPPB and vSPPB were also moderately correlated (all p values <.01). The vSPPB (intraclass correlation = .963, p < .05) was reliable; however, individuals with the lowest function overestimated their overall lower extremity function while participants of all functional levels overestimated their ability on chair stands, but accurately perceived their usual gait speed. Conclusion. In spite of the similarity between the SPPB and vSPPB, the moderate strength of the association between the two suggests that they offer unique perspectives on an older adult’s physical function. PMID:25829520

  19. Review of Enabling Technologies to Facilitate Secure Compute Customization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches, and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand their performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM. This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS-level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provide the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs in the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources while maintaining good performance.
    In Section 4.1 we describe our tests with LXC as a non-root user, leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. In Section 4.2 we performed tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time and MFlops when running in hypervisor-based environments (VMs), as compared to near-native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The native and Docker-based tests achieved >= ~9 Gbits/sec, while the KVM configuration only achieved 2.5 Gbits/sec (Table 4.6 on page 32). This may be a configuration issue with our KVM installation, and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: - Section 1 introduces the report and clarifies the scope of the proj...
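    As a hedged sketch of the kind of native-versus-container timing comparison behind overhead figures like the 2-4% above, one can simply wall-clock the same benchmark in both environments; the command and image names below are hypothetical, not the report's HPCCG setup.

        # Time the same benchmark natively and under a container runtime;
        # "./hpccg" and "hpccg-image" are hypothetical placeholders.
        import subprocess, time

        def wall_time(cmd):
            t0 = time.perf_counter()
            subprocess.run(cmd, check=True)
            return time.perf_counter() - t0

        native = wall_time(["./hpccg", "100", "100", "100"])           # host execution
        docker = wall_time(["docker", "run", "--rm", "hpccg-image",
                            "./hpccg", "100", "100", "100"])           # VE execution
        print(f"container overhead: {100 * (docker - native) / native:.1f}%")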

  20. The novel implicit LU-SGS parallel iterative method based on the diffusion equation of a nuclear reactor on a GPU cluster

    NASA Astrophysics Data System (ADS)

    Zhang, Jilin; Sha, Chaoqun; Wu, Yusen; Wan, Jian; Zhou, Li; Ren, Yongjian; Si, Huayou; Yin, Yuyu; Jing, Ya

    2017-02-01

    GPUs are used not only in graphics but also, increasingly, in areas requiring large numbers of numerical calculations. In the energy industry, because of its low carbon emissions, high energy density, long duration and other characteristics, nuclear energy cannot easily be replaced by other sources. Management of core fuel is one of the major areas of concern in a nuclear power plant, and it is directly related to the economic benefits and cost of nuclear power. The discretized diffusion equation for a large-scale reactor core is large and complicated, so its solution is crucial in the core fuel management process. In this paper, we use CUDA programming technology on a GPU cluster to run the LU-SGS parallel iterative calculation for the diffusion equation of the reactor. We divide the one- and two-dimensional meshes into multiple domains, each distributed evenly over the GPU blocks. A parallel scheme is put forward in which the domains exchange information and transmit data across virtual boundaries through repeated 'collisions'. Compared with the serial program, experiments show that the GPU greatly improves the efficiency of program execution, confirming the increasingly important role GPUs play in the field of numerical calculation.
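    As a CPU-side reference for the kind of sweep each GPU block performs, a symmetric Gauss-Seidel (forward-then-backward, LU-SGS-style) iteration on a 2D model diffusion problem can be sketched as follows; the actual multigroup neutron-diffusion system and the collision-based boundary exchange are not reproduced here.

        # Hedged sketch: symmetric Gauss-Seidel sweeps for -laplacian(phi) = source
        # on a uniform grid with zero Dirichlet boundaries; a toy stand-in for the
        # reactor diffusion system.
        import numpy as np

        def sgs_sweeps(source, h, n_iter=200):
            phi = np.zeros_like(source)
            n, m = phi.shape
            for _ in range(n_iter):
                for rows in (range(1, n - 1), range(n - 2, 0, -1)):  # forward, backward
                    for i in rows:
                        for j in range(1, m - 1):
                            phi[i, j] = 0.25 * (phi[i - 1, j] + phi[i + 1, j] +
                                                phi[i, j - 1] + phi[i, j + 1] +
                                                h * h * source[i, j])
            return phi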

  1. Towards extensive spatio-temporal reconstructions of North American land cover: a comparison of state-of-the-art pollen-vegetation models

    NASA Astrophysics Data System (ADS)

    Dawson, A.; Trachsel, M.; Goring, S. J.; Paciorek, C. J.; McLachlan, J. S.; Jackson, S. T.; Williams, J. W.

    2017-12-01

    Pollen records have been extensively used to reconstruct past changes in vegetation and study the underlying processes. However, developing the statistical techniques needed to accurately represent both data and process uncertainties is a formidable challenge. Recent advances in paleoecoinformatics (e.g. the Neotoma Paleoecology Database and the European Pollen Database), Bayesian age-depth models, process-based pollen-vegetation models (PVMs), and Bayesian hierarchical modeling have pushed paleovegetation reconstructions forward to a point where multiple sources of uncertainty can be incorporated into reconstructions, which in turn enables new hypotheses to be tested and more rigorous integration of paleovegetation data with earth system models and terrestrial ecosystem models. Several kinds of PVMs have been developed, notably LOVE/REVEALS, STEPPS, and classical transfer functions such as the modern analog technique. LOVE/REVEALS has been adopted as the standard method for the LandCover6k effort to develop quantitative reconstructions of land cover for the Holocene, while STEPPS has been developed recently as part of the PalEON project and applied to reconstruct, with uncertainty, shifts in forest composition in New England and the upper Midwest during the late Holocene. Each PVM has different assumptions and structure and uses different input data, but few comparisons among approaches yet exist. Here, we present new reconstructions of land cover change in northern North America during the Holocene based on LOVE/REVEALS and data drawn from the Neotoma database, and compare STEPPS-based reconstructions to those from LOVE/REVEALS. These parallel developments provide an opportunity to compare and contrast models, and to begin to generate continental-scale reconstructions, with explicit uncertainties, that can provide a base for interdisciplinary research within the biogeosciences. We show how STEPPS provides an important benchmark for past land-cover reconstruction, and how the LandCover6k effort in North America advances our understanding of the past by allowing cross-continent comparisons using standardized methods and quantifying the impact of humans in the early Anthropocene.

  2. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    PubMed

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
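    The cost-effectiveness argument ultimately comes down to dollars per completed computation; the toy comparison below exists only to show that arithmetic, with made-up hourly rates and runtimes rather than measured AWS or cluster figures.

        # Toy cost-per-job comparison; all prices and runtimes are invented.
        def cost_per_job(price_per_hr: float, wall_hours: float) -> float:
            return price_per_hr * wall_hours

        cloud = cost_per_job(0.50, 3.0)   # hypothetical on-demand instance
        owned = cost_per_job(1.20, 2.0)   # amortized hardware + power + admin per hour
        print(f"cloud ${cloud:.2f} vs in-house ${owned:.2f} per single-point-energy batch")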

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Song

    CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a certain physical space. Since the numerical results of CFD computation are very hard to understand, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of CFD computation. In many cases CFD datasets are very large (multi-gigabyte), and more and more interaction between the user and the datasets is required. For traditional VR applications, limited computing power is a major factor preventing effective visualization of large datasets. This thesis presents a new system design that speeds up traditional VR applications using parallel and distributed computing, together with the idea of using handheld devices to enhance interaction between the user and the VR CFD application. Techniques from different research areas, including scientific visualization, parallel computing, distributed computing and graphical user interface design, are used in the development of the final system. As a result, the new system can be flexibly built on a heterogeneous computing environment, dramatically shortening computation time.

  4. LDRD project final report : hybrid AI/cognitive tactical behavior framework for LVC.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djordjevich, Donna D.; Xavier, Patrick Gordon; Brannon, Nathan Gregory

    This Lab-Directed Research and Development (LDRD) sought to develop technology that enhances scenario construction speed, entity behavior robustness, and scalability in Live-Virtual-Constructive (LVC) simulation. We investigated issues in both simulation architecture and behavior modeling. We developed path-planning technology that improves the ability to express intent in the planning task while still permitting an efficient search algorithm. An LVC simulation demonstrated how this enables 'one-click' layout of squad tactical paths, as well as dynamic re-planning for simulated squads and for real and simulated mobile robots. We identified human response latencies that can be exploited in parallel/distributed architectures. We did an experimental study to determine where parallelization would be productive in Umbra-based force-on-force (FOF) simulations. We developed and implemented a data-driven simulation composition approach that solves entity class hierarchy issues and supports assurance of simulation fairness. Finally, we proposed a flexible framework to enable integration of multiple behavior modeling components that model working memory phenomena with different degrees of sophistication.

  5. Radiofrequency pulse design in parallel transmission under strict temperature constraints.

    PubMed

    Boulant, Nicolas; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre

    2014-09-01

    To improve radiofrequency (RF) pulse performance by directly addressing the temperature constraints, as opposed to the specific absorption rate (SAR) constraints, in parallel transmission at ultra-high field. The magnitude least-squares RF pulse design problem under hard SAR constraints was solved repeatedly by using the virtual observation points and an active-set algorithm. The SAR constraints were updated at each iteration based on the result of a thermal simulation. The numerical study was performed for a simplified, SAR-demanding time-of-flight sequence using B1 and ΔB0 maps obtained in vivo on a human brain at 7T. The proposed adjustment of the SAR constraints combined with an active-set algorithm provided higher flexibility in RF pulse design within a reasonable time. The modifications of those constraints acted directly upon the thermal response, as desired. Although further confidence in the thermal models is needed, this study shows that RF pulse design under strict temperature constraints is within reach, allowing better RF pulse performance and faster acquisitions at ultra-high fields at the cost of higher sequence complexity. Copyright © 2013 Wiley Periodicals, Inc.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasemir, Kay; Pearson, Matthew R

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.
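    The Scan System extension itself is internal to CS-Studio; as an external illustration of simultaneous PV adjustment, the sketch below uses the pyepics client library with one thread per PV. The PV names and values are hypothetical.

        # Parallel channel-access writes via pyepics; PV names are hypothetical.
        from concurrent.futures import ThreadPoolExecutor
        import epics

        MOVES = {"BL3:Mot:X": 10.5, "BL3:Mot:Y": -2.0, "BL3:Temp:SP": 150.0}

        def move(pv_value):
            pv, value = pv_value
            epics.caput(pv, value, wait=True)   # block until each PV completes
            return pv

        with ThreadPoolExecutor(max_workers=len(MOVES)) as pool:
            for pv in pool.map(move, MOVES.items()):
                print(pv, "done")               # all three settle in parallel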

  7. Simple two-electrode biosignal amplifier.

    PubMed

    Dobrev, D; Neycheva, T; Mudrov, N

    2005-11-01

    A simple, cost-effective circuit for a two-electrode non-differential biopotential amplifier is proposed. It uses a 'virtual ground' transimpedance amplifier and a parallel RC network for input common-mode current equalisation, while the signal input impedance preserves its high value. With this innovative interface circuit, a simple non-inverting amplifier fully emulates a high-CMRR differential amplifier. The amplifier's equivalent CMRR (typically 70-100 dB) is equal to the open-loop gain of the operational amplifier used in the transimpedance interface stage. The circuit has a very simple structure and utilises a small number of popular components. The amplifier is intended for use in various two-electrode applications, such as Holter-type monitors, defibrillators, ECG monitors, biotelemetry devices, etc.

  8. Environmental concept for engineering software on MIMD computers

    NASA Technical Reports Server (NTRS)

    Lopez, L. A.; Valimohamed, K.

    1989-01-01

    The issues related to developing an environment in which engineering systems can be implemented on MIMD machines are discussed. The problem is presented in terms of implementing the finite element method under such an environment. However, neither the concepts nor the prototype implementation environment are limited to this application. The topics discussed include: the ability to schedule and synchronize tasks efficiently; granularity of tasks; load balancing; and the use of a high level language to specify parallel constructs, manage data, and achieve portability. The objective of developing a virtual machine concept which incorporates solutions to the above issues leads to a design that can be mapped onto loosely coupled, tightly coupled, and hybrid systems.

  9. Alignment theory of parallel-beam computed tomography image reconstruction for elastic-type objects using virtual focusing method.

    PubMed

    Jun, Kyungtaek; Kim, Dongwook

    2018-01-01

    X-ray computed tomography has been studied in various fields. Considerable effort has been focused on reconstructing the projection image set from a rigid-type specimen. However, reconstruction of images projected from an object showing elastic motion has received minimal attention. In this paper, a mathematical solution is proposed for reconstructing the projection image set obtained from an object with specific elastic motions, namely specimens that expand or contract periodically, regularly, and elliptically. To reconstruct the projection image set from expanded or contracted specimens, methods are presented for detection of the sample's motion modes, mathematical rescaling of pixel values, and conversion of the projection angle for a common layer.

  10. Virtual EPID standard phantom audit (VESPA) for remote IMRT and VMAT credentialing

    NASA Astrophysics Data System (ADS)

    Miri, Narges; Lehmann, Joerg; Legge, Kimberley; Vial, Philip; Greer, Peter B.

    2017-06-01

    A virtual EPID standard phantom audit (VESPA) has been implemented for remote auditing in support of facility credentialing for clinical trials using IMRT and VMAT. VESPA is based on published methods and a clinically established IMRT QA procedure, here extended to multi-vendor equipment. Facilities are provided with comprehensive instructions and CT datasets to create treatment plans. They deliver the treatment directly to their EPID without any phantom or couch in the beam. In addition, they deliver a set of simple calibration fields per instructions. Collected EPID images are uploaded electronically. In the analysis, the dose is projected back into a virtual cylindrical phantom. 3D gamma analysis is performed. 2D dose planes and linear dose profiles are provided and can be considered when needed for clarification. In addition, using a virtual flat-phantom, 2D field-by-field or arc-by-arc gamma analyses are performed. Pilot facilities covering a range of planning and delivery systems have performed data acquisition and upload successfully. Advantages of VESPA are (1) fast turnaround mainly driven by the facility’s capability of providing the requested EPID images, (2) the possibility for facilities performing the audit in parallel, as there is no need to wait for a phantom, (3) simple and efficient credentialing for international facilities, (4) a large set of data points, and (5) a reduced impact on resources and environment as there is no need to transport heavy phantoms or audit staff. Limitations of the current implementation of VESPA for trials credentialing are that it does not provide absolute dosimetry, therefore a Level I audit is still required, and that it relies on correctly delivered open calibration fields, which are used for system calibration. The implemented EPID based IMRT and VMAT audit system promises to dramatically improve credentialing efficiency for clinical trials and wider applications.
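    The gamma analyses mentioned above combine a dose-difference criterion with a distance-to-agreement criterion; a brute-force 2D global gamma sketch is given below. The 3%/3 mm criteria and search radius are illustrative defaults, and this is not the VESPA implementation.

        # Simplified 2D global gamma index (brute force). A point passes when the
        # minimum combined dose/distance metric over the search window is <= 1.
        import numpy as np

        def gamma_2d(ref, evl, pixel_mm, dd=0.03, dta_mm=3.0, search_mm=9.0):
            r = int(search_mm / pixel_mm)
            norm = dd * ref.max()                    # global dose-difference criterion
            pad = np.pad(evl, r, mode="edge")
            gam = np.full(ref.shape, np.inf)
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    shifted = pad[r + di: r + di + ref.shape[0],
                                  r + dj: r + dj + ref.shape[1]]
                    dist2 = ((di * pixel_mm) ** 2 + (dj * pixel_mm) ** 2) / dta_mm ** 2
                    dose2 = ((shifted - ref) / norm) ** 2
                    gam = np.minimum(gam, np.sqrt(dist2 + dose2))
            return gam                                # pass rate: np.mean(gam <= 1.0)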

  11. Separating twin images and locating the center of a microparticle in dense suspensions using correlations among reconstructed fields of two parallel holograms.

    PubMed

    Ling, Hangjian; Katz, Joseph

    2014-09-20

    This paper deals with two issues affecting the application of digital holographic microscopy (DHM) for measuring the spatial distribution of particles in a dense suspension, namely discriminating between real and virtual images and accurate detection of the particle center. Previous methods to separate real and virtual fields have involved applications of multiple phase-shifted holograms, combining reconstructed fields of multiple axially displaced holograms, and analysis of intensity distributions of weakly scattering objects. Here, we introduce a simple approach based on simultaneously recording two in-line holograms, whose planes are separated by a short distance from each other. This distance is chosen to be longer than the elongated trace of the particle. During reconstruction, the real images overlap, whereas the virtual images are displaced by twice the distance between hologram planes. Data analysis is based on correlating the spatial intensity distributions of the two reconstructed fields to measure displacement between traces. This method has been implemented for both synthetic particles and a dense suspension of 2 μm particles. The correlation analysis readily discriminates between real and virtual images of a sample containing more than 1300 particles. Consequently, we can now implement DHM for three-dimensional tracking of particles when the hologram plane is located inside the sample volume. Spatial correlations within the same reconstructed field are also used to improve the detection of the axial location of the particle center, extending previously introduced procedures to suspensions of microscopic particles. For each cross section within a particle trace, we sum the correlations among intensity distributions in all planes located symmetrically on both sides of the section. This cumulative correlation has a sharp peak at the particle center. Using both synthetic and recorded particle fields, we show that the uncertainty in localizing the axial location of the center is reduced to about one particle's diameter.
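    The discrimination step reduces to measuring the displacement between intensity traces in the two reconstructed fields: real images overlap (near-zero shift) while virtual images are displaced. An FFT cross-correlation sketch of that measurement follows (integer-pixel, sign convention aside; not the authors' full pipeline).

        # Estimate the offset between two reconstructed intensity fields by locating
        # the peak of their FFT-based cross-correlation.
        import numpy as np

        def peak_shift(a, b):
            """Integer-pixel offset between fields a and b (up to sign convention)."""
            corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
            idx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
            # map wrapped indices to signed shifts
            return tuple(i if i <= s // 2 else i - s for i, s in zip(idx, corr.shape))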

  12. Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Fazio, D.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Sedov, A.; Twomey, M. S.; Wang, F.; Zaytsev, A.

    2015-12-01

    During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project has taken advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High-Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 Virtual Machines (VMs), each with 8 CPU cores, for a total of up to 22000 parallel jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 project, operating a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 33 million CPU-hours and generated more than 1.1 billion Monte Carlo events. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interference with TDAQ operations and guarantees the security and usability of the ATLAS private network. The cloud mechanism allows the separation of the needed support on both the infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. This paper focuses on the operational aspects of such a large system during the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch from Sim@P1 to TDAQ mode and back, to exploit the resources when they are not used for data acquisition, even for short periods. The evolution of the central OpenStack infrastructure is described, as it was upgraded from the Folsom to the Icehouse release, including the scalability issues addressed.

  13. Virtual EPID standard phantom audit (VESPA) for remote IMRT and VMAT credentialing.

    PubMed

    Miri, Narges; Lehmann, Joerg; Legge, Kimberley; Vial, Philip; Greer, Peter B

    2017-06-07

    A virtual EPID standard phantom audit (VESPA) has been implemented for remote auditing in support of facility credentialing for clinical trials using IMRT and VMAT. VESPA is based on published methods and a clinically established IMRT QA procedure, here extended to multi-vendor equipment. Facilities are provided with comprehensive instructions and CT datasets to create treatment plans. They deliver the treatment directly to their EPID without any phantom or couch in the beam. In addition, they deliver a set of simple calibration fields per instructions. Collected EPID images are uploaded electronically. In the analysis, the dose is projected back into a virtual cylindrical phantom. 3D gamma analysis is performed. 2D dose planes and linear dose profiles are provided and can be considered when needed for clarification. In addition, using a virtual flat-phantom, 2D field-by-field or arc-by-arc gamma analyses are performed. Pilot facilities covering a range of planning and delivery systems have performed data acquisition and upload successfully. Advantages of VESPA are (1) fast turnaround mainly driven by the facility's capability of providing the requested EPID images, (2) the possibility for facilities performing the audit in parallel, as there is no need to wait for a phantom, (3) simple and efficient credentialing for international facilities, (4) a large set of data points, and (5) a reduced impact on resources and environment as there is no need to transport heavy phantoms or audit staff. Limitations of the current implementation of VESPA for trials credentialing are that it does not provide absolute dosimetry, therefore a Level I audit is still required, and that it relies on correctly delivered open calibration fields, which are used for system calibration. The implemented EPID based IMRT and VMAT audit system promises to dramatically improve credentialing efficiency for clinical trials and wider applications.

  14. Colloidal assembly directed by virtual magnetic moulds

    NASA Astrophysics Data System (ADS)

    Demirörs, Ahmet F.; Pillai, Pramod P.; Kowalczyk, Bartlomiej; Grzybowski, Bartosz A.

    2013-11-01

    Interest in assemblies of colloidal particles has long been motivated by their applications in photonics, electronics, sensors and microlenses. Existing assembly schemes can position colloids of one type relatively flexibly into a range of desired structures, but it remains challenging to produce multicomponent lattices, clusters with precisely controlled symmetries and three-dimensional assemblies. A few schemes can efficiently produce complex colloidal structures, but they require system-specific procedures. Here we show that magnetic field microgradients established in a paramagnetic fluid can serve as 'virtual moulds' to act as templates for the assembly of large numbers (~10^8) of both non-magnetic and magnetic colloidal particles with micrometre precision and typical yields of 80 to 90 per cent. We illustrate the versatility of this approach by producing single-component and multicomponent colloidal arrays, complex three-dimensional structures and a variety of colloidal molecules from polymeric particles, silica particles and live bacteria, and by showing that all of these structures can be made permanent. In addition, although our magnetic moulds currently resemble optical traps in that they are limited to the manipulation of micrometre-sized objects, they are massively parallel and can manipulate non-magnetic and magnetic objects simultaneously in two and three dimensions.

  15. Metronome LKM: An open source virtual keyboard driver to measure experiment software latencies.

    PubMed

    Garaizar, Pablo; Vadillo, Miguel A

    2017-10-01

    Experiment software is often used to measure reaction times gathered with keyboards or other input devices. In previous studies, the accuracy and precision of time stamps has been assessed through several means: (a) generating accurate square-wave signals from an external device connected to the parallel port of the computer running the experiment software, (b) triggering the typematic repeat feature of some keyboards to get an evenly separated series of keypress events, or (c) using a solenoid handled by a microcontroller to press the input device (keyboard, mouse button, touch screen) that will be used in the experimental setup. Despite the advantages of these approaches in some contexts, none of them can isolate the measurement error caused by the experiment software itself. Metronome LKM provides a virtual keyboard for assessing experiment software. Using this open source driver, researchers can generate keypress events using high-resolution timers and compare the time stamps collected by the experiment software with those gathered by Metronome LKM (with nanosecond resolution). Our software is highly configurable (in terms of keys pressed, intervals, SysRq activation) and runs on Linux kernels 2.6 through 4.8.
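    Metronome LKM itself is a kernel module; purely to convey the idea in user space, the sketch below uses the python-evdev library to create a virtual keyboard and emit timestamped keypresses. Unlike the driver's kernel high-resolution timers, this sketch is subject to ordinary scheduler jitter, and it requires permission to access /dev/uinput.

        # User-space approximation of a virtual-keyboard latency probe using
        # python-evdev; not the Metronome LKM driver itself.
        import time
        from evdev import UInput, ecodes

        def emit_keypresses(n=10, interval_s=0.5, key=ecodes.KEY_SPACE):
            ui = UInput()                            # virtual keyboard device
            try:
                stamps = []
                for _ in range(n):
                    stamps.append(time.monotonic_ns())   # our own reference timestamp
                    ui.write(ecodes.EV_KEY, key, 1)      # key down
                    ui.write(ecodes.EV_KEY, key, 0)      # key up
                    ui.syn()
                    time.sleep(interval_s)
                return stamps  # compare against timestamps logged by the software under test
            finally:
                ui.close()

        if __name__ == "__main__":
            emit_keypresses()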

  16. MST radar transmitter control and monitor system

    NASA Technical Reports Server (NTRS)

    Brosnahan, J. W.

    1983-01-01

    A generalized transmitter control and monitor card was developed using the Intel 8031 (8051 family) microprocessor. The design was generalized so that this card can be utilized for virtually any control application with only firmware changes. The block diagram appears in Figure 2. The card provides for local control using a 16-key keypad (up to 64 keys are supported). The local display is four digits of 7-segment LEDs. The display can indicate the status of all major system parameters and provide voltage readout for the analog signal inputs. The card can be populated with only the chips required for a given application. Fully populated, the card has two RS-232 serial ports for computer communications. It has a total of 48 TTL parallel lines that can be defined as either inputs or outputs in groups of four. A total of 32 analog inputs with a 0-5 volt range are supported. In addition, a real-time clock/calendar is available if required. A total of 16 kbytes of ROM and 16 kbytes of RAM are available for programming. This card can be the basis of virtually any monitor or control system with appropriate software.

  17. Collaborative visual analytics of radio surveys in the Big Data era

    NASA Astrophysics Data System (ADS)

    Vohl, Dany; Fluke, Christopher J.; Hassan, Amr H.; Barnes, David G.; Kilborn, Virginia A.

    2017-06-01

    Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered with a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction, parallel data query, along with data management. Its modularity will make it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform - allowing the research process to continue wherever you are.

  18. A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation

    NASA Astrophysics Data System (ADS)

    da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille

    2012-03-01

    Computational fluid dynamics has become an important field not only for physics and engineering but also for simulation, computer graphics, virtual reality and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time performance. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system that combines previously published algorithms with a data-structure organisation based on a heterogeneous architecture using CPUs and GPUs, in order to simulate the interaction of fluids and rigid bodies. This successfully results in a two-way interaction between fluids and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different paradigms of hardware architecture for this specific kind of problem. Since our method achieves real-time results, it is suitable for fluid simulation problems in virtual reality, simulation and video games.

  19. Dynamically programmable cache

    NASA Astrophysics Data System (ADS)

    Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas

    1998-10-01

    Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with this approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors which offers solutions to the above problems. To solve memory access problems, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create multi-level reconfigurable machines. As a result, DPC machines have both higher data accessibility and higher FPGA memory bandwidth. To solve the limited FPGA resource problem, DPC processors implement a multi-context switching (virtualization) concept. Virtualization allows implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, DPC machines are shown to be 5X faster than an Altera FLEX10K FPGA chip and 2X faster than a Sun Ultra 1 SPARCstation for two different algorithms (convolution and motion estimation).

  20. The use of virtual world-based cardiac rehabilitation to encourage healthy lifestyle choices among cardiac patients: intervention development and pilot study protocol.

    PubMed

    Brewer, LaPrincess C; Kaihoi, Brian; Zarling, Kathleen K; Squires, Ray W; Thomas, Randal; Kopecky, Stephen

    2015-04-08

    Despite proven benefits through the secondary prevention of cardiovascular disease (CVD) and reduction of mortality, cardiac rehabilitation (CR) remains underutilized in cardiac patients. Underserved populations most affected by CVD, including rural residents, low socioeconomic status patients, and racial/ethnic minorities, have the lowest participation rates due to access barriers. Internet- and mobile-based lifestyle interventions have emerged as potential modalities to complement and increase accessibility to CR. An outpatient CR program using virtual world technology may provide an effective alternative to conventional CR by overcoming patient access limitations such as geographic distance, work schedule constraints, and transportation. The objective of this paper is to describe the research protocol of a two-phase pilot study that will assess the feasibility (Phase 1) and comparative effectiveness (Phase 2) of a virtual world-based (Second Life) CR program as an extension of a conventional CR program in achieving healthy behavioral change among post-acute coronary syndrome (ACS) and post-percutaneous coronary intervention (PCI) patients. We hypothesize that virtual world CR users will improve behaviors (physical activity, diet, and smoking) to a greater degree than conventional CR participants. In Phase 1, we will recruit at least 10 patients enrolled in outpatient CR who were recently hospitalized for an ACS (unstable angina, ST-segment elevation myocardial infarction, non-ST-segment elevation myocardial infarction) or who recently underwent elective PCI at Mayo Clinic Hospital, Rochester Campus in Rochester, MN, with at least one modifiable lifestyle risk factor target (sedentary lifestyle, unhealthy diet, or current smoking). Recruited patients will participate in a 12-week, virtual world health education program which will provide feedback on the feasibility, usability, and design of the intervention. During Phase 2, we will conduct a 2-arm, parallel group, single-center, randomized controlled trial (RCT). Patients will be randomized at a 1:1 ratio to adjunct virtual world-based CR with conventional CR or conventional CR only. The primary outcome is a composite including at least one of the following: (1) at least 150 minutes of physical activity per week, (2) daily consumption of five or more fruits and vegetables, and (3) smoking cessation. Patients will be assessed at 3, 6, and 12 months. The Phase 1 feasibility study is currently open for recruitment and will be followed by the Phase 2 RCT. The anticipated completion date for the study is May 2016. While research on the use of virtual world technology in health programs is in its infancy, it offers unique advantages over current Web-based health interventions, including social interactivity and active learning. It also increases accessibility for vulnerable populations who have higher burdens of CVD. This study will yield results on the effectiveness of a virtual world-based CR program as an innovative platform to influence healthy lifestyle behavior and self-efficacy.

  1. Implementing the PM Programming Language using MPI and OpenMP - a New Tool for Programming Geophysical Models on Parallel Systems

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2015-04-01

    PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling as the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores, or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. When the program enters a parallel statement, either processors are divided out among the newly generated tasks (number of new tasks < number of processors) or tasks are divided out among the available processors (number of tasks > number of processors). Nested parallel statements may further subdivide the processor set owned by a given task. Tasks or processors are distributed evenly by default, but uneven distributions are possible under programmer control. It is also possible to explicitly enable child tasks to migrate within the processor set owned by their parent task, reducing load imbalance at the potential cost of increased inter-processor message traffic. PM incorporates some programming structures from the earlier MIST language presented at a previous EGU General Assembly, while adopting a significantly different underlying parallelisation model and type system. PM code is available at www.pm-lang.org under an unrestrictive MIT license. Reference: Ruymán Reyes, Antonio J. Dorta, Francisco Almeida, Francisco de Sande, 2009. Automatic Hybrid MPI+OpenMP Code Generation with llc, Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science Volume 5759, 185-195
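
    The two-tier mapping described above can be pictured with a minimal hybrid MPI/OpenMP program. The sketch below is an illustration only, not PM runtime code; the task count and the round-robin assignment of ranks to tasks are assumptions. It divides the ranks of MPI_COMM_WORLD among a fixed number of tasks using MPI_Comm_split, then opens an OpenMP parallel region within each rank.

      /* Minimal two-tier parallelism sketch: MPI ranks are grouped
         into "tasks" via MPI_Comm_split, and each rank then runs
         OpenMP threads internally. Compile with: mpicc -fopenmp */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      #define NUM_TASKS 4    /* assumed number of parallel tasks */

      int main(int argc, char **argv)
      {
          int world_rank, world_size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
          MPI_Comm_size(MPI_COMM_WORLD, &world_size);

          /* First tier: divide processors evenly among tasks. */
          int task_id = world_rank % NUM_TASKS;
          MPI_Comm task_comm;
          MPI_Comm_split(MPI_COMM_WORLD, task_id, world_rank, &task_comm);

          int task_rank, task_size;
          MPI_Comm_rank(task_comm, &task_rank);
          MPI_Comm_size(task_comm, &task_size);

          /* Second tier: threads within each abstract processor. */
          #pragma omp parallel
          {
              printf("task %d: proc %d/%d, thread %d/%d\n",
                     task_id, task_rank, task_size,
                     omp_get_thread_num(), omp_get_num_threads());
          }

          MPI_Comm_free(&task_comm);
          MPI_Finalize();
          return 0;
      }

    Nested parallel statements correspond to further MPI_Comm_split calls on task_comm, subdividing the processor set owned by each task.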

  2. Flexible, task-dependent use of sensory feedback to control hand movements

    PubMed Central

    Knill, David C.; Bondada, Amulya; Chhabra, Manu

    2011-01-01

    We tested whether changing accuracy demands for simple pointing movements leads humans to adjust the feedback control laws that map sensory signals from the moving hand to motor commands. Subjects made repeated pointing movements in a virtual environment to touch a button whose shape varied randomly from trial to trial among squares, rectangles oriented perpendicular to the movement path, and rectangles oriented parallel to the movement path. Subjects performed the task on a horizontal table, but saw the target configuration and a virtual rendering of their pointing finger through a mirror mounted between a monitor and the table. On one-third of trials, the position of the virtual finger was perturbed by ±1 cm, either in the movement direction or perpendicular to it, when the finger passed behind an occluder. Subjects corrected quickly for the perturbations despite not consciously noticing them; however, they corrected almost twice as much for perturbations aligned with the narrow dimension of a target as for perturbations aligned with the long dimension. These changes in apparent feedback gain appeared in the kinematic trajectories soon after the time of the perturbations, indicating that they reflect differences in the feedback control law used throughout the duration of movements. The results indicate that the brain adjusts its feedback control law for individual movements “on demand” to fit task demands. Simulations of optimal control laws for a two-joint arm show that accuracy demands alone, coupled with signal-dependent noise, lead to qualitatively the same behavior. PMID:21273407

  3. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000-processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
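
    The single collective operation in this mapping, the global summation, can be sketched with modern message-passing primitives (MPI rather than CM-2 primitives; the array sizes and data below are placeholders): each processor forms a partial dot product locally, and a tree reduction combines the partial sums in O(log P) steps.

      /* Sketch of the global-summation step in a data-parallel
         feed-forward pass: each processor sums its local slice of
         weight*activation products, then an O(log P) all-reduce
         combines the partial sums into one neuron net input. */
      #include <mpi.h>
      #include <stdio.h>

      #define LOCAL_N 1024    /* inputs held by this processor */

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          double w[LOCAL_N], x[LOCAL_N];
          for (int i = 0; i < LOCAL_N; i++) { w[i] = 0.01; x[i] = 1.0; }

          /* Local partial dot product: no communication. */
          double partial = 0.0;
          for (int i = 0; i < LOCAL_N; i++)
              partial += w[i] * x[i];

          /* Global summation across processors: tree reduction. */
          double net_input = 0.0;
          MPI_Allreduce(&partial, &net_input, 1, MPI_DOUBLE,
                        MPI_SUM, MPI_COMM_WORLD);

          int rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          if (rank == 0)
              printf("neuron net input = %f\n", net_input);

          MPI_Finalize();
          return 0;
      }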

  4. Volunteered Cloud Computing for Disaster Management

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster management relies increasingly on interpreting earth observations and running numerical models, which require significant computing capacity, usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however, some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity, if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern/trend detection, and large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context. Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects; automates reconfiguration of their virtual machines; ensures accountability for donated computing; and optimizes the use of "interstitial" computing. Initial applications include fire detection from multispectral satellite imagery and flood risk mapping through hydrological simulations.

  5. Exploring the range of climate biome projections for tropical South America: The role of CO2 fertilization and seasonality

    NASA Astrophysics Data System (ADS)

    Lapola, David M.; Oyama, Marcos D.; Nobre, Carlos A.

    2009-09-01

    Tropical South America vegetation cover projections for the end of the century differ considerably depending on the climate scenario and also on how physiological processes are considered in vegetation models. In this paper we use a potential vegetation model (CPTEC-PVM2) to analyze biome distribution in tropical South America under a range of climate projections and a range of estimates of the effects of increased atmospheric CO2. We show that if the CO2 "fertilization effect" indeed takes place and is maintained in the long term in tropical forests, then it will avoid biome shifts in Amazonia in most of the climate scenarios, even if the effect of CO2 fertilization is halved. However, if CO2 fertilization does not play any important role in tropical forests in the future, or if the dry season becomes longer than 4 months (projected by 2 of the 14 GCMs), then large portions of Amazonia are replaced by tropical savanna.

  6. NASA Tech Briefs, April 2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Topics covered include: Analysis of SSEM Sensor Data Using BEAM; Hairlike Percutaneous Photochemical Sensors; Video Guidance Sensors Using Remotely Activated Targets; Simulating Remote Sensing Systems; EHW Approach to Temperature Compensation of Electronics; Polymorphic Electronic Circuits; Micro-Tubular Fuel Cells; Whispering-Gallery-Mode Tunable Narrow-Band-Pass Filter; PVM Wrapper; Simulation of Hyperspectral Images; Algorithm for Controlling a Centrifugal Compressor; Hybrid Inflatable Pressure Vessel; Double-Acting, Locking Carabiners; Position Sensor Integral with a Linear Actuator; Improved Electromagnetic Brake; Flow Straightener for a Rotating-Drum Liquid Separator; Sensory-Feedback Exoskeletal Arm Controller; Active Suppression of Instabilities in Engine Combustors; Fabrication of Robust, Flat, Thinned, UV-Imaging CCDs; Chemical Thinning Process for Fabricating UV-Imaging CCDs; Pseudoslit Spectrometer; Waste-Heat-Driven Cooling Using Complex Compound Sorbents; Improved Refractometer for Measuring Temperatures of Drops; Semiconductor Lasers Containing Quantum Wells in Junctions; Phytoplankton-Fluorescence-Lifetime Vertical Profiler; Hexagonal Pixels and Indexing Scheme for Binary Images; Finding Minimum-Power Broadcast Trees for Wireless Networks; and Automation of Design Engineering Processes.

  7. Virtual earthquake engineering laboratory with physics-based degrading materials on parallel computers

    NASA Astrophysics Data System (ADS)

    Cho, In Ho

    For the last few decades, we have gained tremendous insight into the underlying microscopic mechanisms of degrading quasi-brittle materials from persistent and near-saintly efforts in laboratories, and at the same time we have seen unprecedented evolution in computational technology such as massively parallel computers. Thus, the time is ripe to embark on a novel approach to settle unanswered questions, especially for the earthquake engineering community, by harmoniously combining the microphysics mechanisms with advanced parallel computing technology. To begin with, it should be stressed that we placed a great deal of emphasis on preserving the clear meaning and physical counterparts of all the microscopic material models proposed herein, in the belief that the more physical mechanisms we incorporate, the better prediction we can obtain. We began by reviewing representative microscopic analysis methodologies, selecting the "fixed-type" multidirectional smeared crack model as the base framework for nonlinear quasi-brittle materials, since it is widely believed to best retain the physical nature of actual cracks. Microscopic stress functions are proposed by integrating well-received existing models to update normal stresses on the crack surfaces (up to three orthogonal surfaces are allowed to initiate herein) under cyclic loading. Unlike the normal stress update, special attention had to be paid to the shear stress update on the crack surfaces, due primarily to the well-known pathological nature of the fixed-type smeared crack model: spurious large stress transfer across an open crack under nonproportional loading. In hopes of exploiting physical mechanisms to resolve this deleterious nature of the fixed crack model, a tribology-inspired three-dimensional (3d) interlocking mechanism is proposed. Following the main trend of tribology (i.e., the science and engineering of interacting surfaces), we introduced the base fabric of solid particles in a soft matrix to explain realistic interlocking over rough crack surfaces, with a Gaussian distribution feeding random particle sizes to the entire domain. Validation against a well-documented rough crack experiment reveals promising accuracy of the proposed 3d interlocking model. A consumed energy-based damage model is proposed to capture the weak correlation between the normal and shear stresses on the crack surfaces and to describe the nature of irrecoverable damage. Since the evaluation of the consumed energy is directly linked to the microscopic deformation, which can be efficiently tracked on the crack surfaces, the proposed damage model is believed to provide a more physical interpretation than existing damage mechanics, which fundamentally stem from mathematical derivation with few physical counterparts. Another novel point of the present work lies in the topological transition-based "smart" steel bar model, notably with an evolving compressive buckling length. We present a systematic framework of information flow between the key ingredients of composite materials (i.e., a steel bar and its surrounding concrete elements). The smart steel model can incorporate smooth transition during reversal loading, tensile rupture, early buckling after reversal from excessive tensile loading, and even compressive buckling. In particular, the buckling length is made to evolve according to the damage states of the surrounding elements of each bar, while all other dominant models leave the length unchanged.
What lies behind all the aforementioned novel attempts is, of course, the problem-optimized parallel platform. In fact, parallel computing in our field has been restricted to monotonic shock or blast loading with explicit algorithms, which are characteristically easy to parallelize. In the present study, efficient parallelization strategies are proposed for a highly demanding implicit nonlinear finite element analysis (FEA) program for real-scale reinforced concrete (RC) structures under cyclic loading. A quantitative comparison of state-of-the-art parallel strategies, in terms of factorization, was carried out, leading to a problem-optimized solver that exploits the penalty method and the banded nature of the system. In particular, the penalty method employed imparts considerable smoothness to the global response, which gives the parallel triangular system solver a practical advantage over other advanced solvers such as the parallel preconditioned conjugate gradient method. Other salient issues in parallelization are also addressed. The parallel platform established offers unprecedented access to simulations of real-scale structures, giving new understanding of the physics-based mechanisms adopted and of probabilistic randomness at the entire-system level. In particular, the platform enables bold simulations of real-scale RC structures exposed to cyclic loading: an H-shaped wall system and a 4-story T-shaped wall system. The simulations show the desired capability of accurately predicting global force-displacement responses, postpeak softening behavior, and compressive buckling of longitudinal steel bars. It is fascinating to see that the intrinsic randomness of the 3d interlocking model appears to cause "localized" damage in the real-scale structures, which is consistent with observations reported in different fields such as granular media. Equipped with accuracy, stability and scalability as demonstrated so far, the parallel platform is believed to serve as fertile ground for introducing further physical mechanisms into various research fields as well as the earthquake engineering community. In the near future, it can be further expanded to run in concert with established FEA programs such as FRAME3d or OPENSEES. Following the central notion of "multiscale" analysis, actual infrastructure exposed to extreme natural hazards can be successfully tackled by this next-generation analysis tool: the harmonious union of the parallel platform and a general FEA program. At the same time, any type of experiment can be easily conducted in this "virtual laboratory."
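
For reference, the penalty method mentioned above takes a standard form (a generic statement under the usual assumptions, not necessarily the exact formulation used in this dissertation): constraints C u = d on the finite element system K u = f are enforced by augmenting the stiffness matrix,

      % Generic penalty augmentation of a constrained linear FE system
      % (penalty weight \alpha chosen large relative to entries of K):
      \[
        (K + \alpha\, C^{\mathsf{T}} C)\, u = f + \alpha\, C^{\mathsf{T}} d
      \]

A large alpha enforces the constraints approximately while keeping the system symmetric, and, when C couples only nearby degrees of freedom, it can preserve the banded structure that a direct triangular solve exploits.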

  8. Accelerating Climate and Weather Simulations through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPUs) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using the Message Passing Interface. To address the challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to running IP over the IB protocol. Full utilization of the IB Sockets Direct Protocol and the lower-latency production version of IBM DAV would reduce this overhead.

  9. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; López, Sebastián; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of the hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelism and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, it applies the gradient descent method to iteratively refine the endmembers and the abundances, reducing the mean square error in accordance with the linear mixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
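
    A toy sketch of this kind of joint refinement is given below. It is an assumption-laden illustration, not the authors' implementation: the dimensions, step size, non-negativity clamp, and iteration count are all arbitrary. It alternates gradient descent steps on the abundances and the endmembers under the linear mixing model X ≈ E·A; note that each pixel and endmember update is independent of the others, which is where the high degree of parallelism comes from.

      /* Toy joint endmember/abundance refinement by gradient descent
         under the linear mixing model X ~ E*A (B bands, P endmembers,
         N pixels), with a non-negativity clamp. Illustrative only. */
      #include <stdio.h>

      #define B 4
      #define P 3
      #define N 5

      static double clamp0(double v) { return v < 0.0 ? 0.0 : v; }

      int main(void)
      {
          double X[B][N], E[B][P], A[P][N], R[B][N];
          double eta = 0.01;    /* step size (assumed) */

          /* Synthetic positive data and flat initial abundances. */
          for (int b = 0; b < B; b++)
              for (int n = 0; n < N; n++) X[b][n] = 0.1 * (b + n + 1);
          for (int b = 0; b < B; b++)
              for (int p = 0; p < P; p++) E[b][p] = 0.1 * (b + p + 1);
          for (int p = 0; p < P; p++)
              for (int n = 0; n < N; n++) A[p][n] = 1.0 / P;

          for (int it = 0; it < 200; it++) {
              /* Residual R = X - E*A; each pixel column independent. */
              for (int b = 0; b < B; b++)
                  for (int n = 0; n < N; n++) {
                      double s = 0.0;
                      for (int p = 0; p < P; p++) s += E[b][p] * A[p][n];
                      R[b][n] = X[b][n] - s;
                  }
              /* Abundance step: A += eta * E^T R, clamped to >= 0. */
              for (int p = 0; p < P; p++)
                  for (int n = 0; n < N; n++) {
                      double g = 0.0;
                      for (int b = 0; b < B; b++) g += E[b][p] * R[b][n];
                      A[p][n] = clamp0(A[p][n] + eta * g);
                  }
              /* Endmember step: E += eta * R A^T, clamped to >= 0. */
              for (int b = 0; b < B; b++)
                  for (int p = 0; p < P; p++) {
                      double g = 0.0;
                      for (int n = 0; n < N; n++) g += R[b][n] * A[p][n];
                      E[b][p] = clamp0(E[b][p] + eta * g);
                  }
          }
          printf("refined A[0][0] = %f\n", A[0][0]);
          return 0;
      }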

  10. Impurity-doped optical shock, detonation and damage location sensor

    DOEpatents

    Weiss, J.D.

    1995-02-07

    A shock, detonation, and damage location sensor provides a continuous fiber-optic means of measuring shock speed and damage location, and can be designed through proper cabling to have virtually any desired crush pressure. The sensor has one or a plurality of parallel multimode optical fibers, or a singlemode fiber core surrounded by an elongated cladding, doped along their entire length with impurities so as to fluoresce in response to light at a different wavelength entering one end of the fiber(s). The length of a fiber would be continuously shortened as it is progressively destroyed by a shock wave traveling parallel to its axis. The resulting backscattered and shifted light would eventually enter a detector and be converted into proportional electrical signals, which would be evaluated to determine shock velocity and damage location. The corresponding reduction in output, caused by the shortening of the optical fibers, is used as it is received to determine the velocity and position of the shock front as a function of time. As a damage location sensor, the sensor fiber cracks along with the structure to which it is mounted. The size of the resulting drop in detector output is indicative of the location of the crack. 8 figs.

  11. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen much significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burden from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps, with over 2100 full SSL handshakes per second, at a clock rate of 95 MHz.

  12. Computing with Beowulf

    NASA Technical Reports Server (NTRS)

    Cohen, Jarrett

    1999-01-01

    Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems is still a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems, after the 11th century epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center, a laboratory for the Earth and space sciences. Computing managers there threw down a gauntlet: develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the University Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.

  13. Impurity-doped optical shock, detonation and damage location sensor

    DOEpatents

    Weiss, Jonathan D.

    1995-01-01

    A shock, detonation, and damage location sensor provides a continuous fiber-optic means of measuring shock speed and damage location, and can be designed through proper cabling to have virtually any desired crush pressure. The sensor has one or a plurality of parallel multimode optical fibers, or a singlemode fiber core surrounded by an elongated cladding, doped along their entire length with impurities so as to fluoresce in response to light at a different wavelength entering one end of the fiber(s). The length of a fiber would be continuously shortened as it is progressively destroyed by a shock wave traveling parallel to its axis. The resulting backscattered and shifted light would eventually enter a detector and be converted into proportional electrical signals, which would be evaluated to determine shock velocity and damage location. The corresponding reduction in output, caused by the shortening of the optical fibers, is used as it is received to determine the velocity and position of the shock front as a function of time. As a damage location sensor, the sensor fiber cracks along with the structure to which it is mounted. The size of the resulting drop in detector output is indicative of the location of the crack.

  14. The Effects of In-Nature and Virtual-Nature Field Trip Experiences On Proenvironmental Attitudes and Behaviors, And Environmental Knowledge Of Middle School Students

    NASA Astrophysics Data System (ADS)

    Ferderbar, Catherine A.

    To develop sustainable solutions to remediate the complex ecological problems of earth's soil, water, and air degradation requires the talents and skills of knowledgeable, motivated people (UNESCO, 1977; UNESCO, 2010). Researchers have historically emphasized that time spent in outdoor nature activities (Wells & Lekies, 2006), particularly with an adult mentor (Chawla & Cushing, 2007), promotes environmental knowledge and nature-relatedness, precursors to environmental literacy. Research has also demonstrated that technology is integral to the lives of youth, who spend an average of 7 hours 38 minutes daily engaged with electronic media (Rideout et al., 2010). Educators would benefit from knowing whether in-nature and virtual-nature field trip experiences provide comparable levels of knowledge and connectedness to nurture student proenvironmentalism. To investigate field trip phenomena, the researcher studied the impact of virtual-nature and in-nature experiences during which students analyzed water quality along Midwestern rivers. The quasi-experimental, mixed-method, convergent parallel design, with a purposeful sample (n=131) of middle school students from two Midwestern K-8 schools, utilized scientist participant-observer field records and a narrative-response written assessment aligned to field trip content to evaluate knowledge acquisition. To gain insight into student environmental dispositions, participant observers recorded student comments and behaviors throughout the field trips. A survey, administered Pre-Treatment, Post-Treatment 1, and Post-Treatment 2, focused on family water-related behaviors and student perceptions of the need for local government water protection. The findings demonstrated that both field trips increased content knowledge significantly, with large effect sizes. Content knowledge gained from one experience transferred to and was augmented by the second experience. Skill gain (technical and observational) varied by type of field trip and did not transfer. Technical skill was often paired with critical thinking/reasoning. Survey results demonstrated that the virtual-nature then in-nature order evinced a greater proenvironmental attitude and behavioral change. The initial experience provided the greater proenvironmental impact, regardless of order. Several students exhibited a Connection to Life Experience that reinforced their nature-relatedness during either field trip. These findings inform best practices associated with environmental education. The implications include teacher-practitioner collaboration with IT personnel, naturalists, hydrologists, and zoological and botanical experts to design local, site-based virtual-nature and in-nature (or hybrid) field trips to nurture environmental literacy goals.

  15. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2011-01-11

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally, the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  16. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-02-21

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally, the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  17. The Language Grid: supporting intercultural collaboration

    NASA Astrophysics Data System (ADS)

    Ishida, T.

    2018-03-01

    A variety of language resources already exist online. Unfortunately, since many language resources have usage restrictions, it is virtually impossible for each user to negotiate with every language resource provider when combining several resources to achieve an intended purpose. To increase the accessibility and usability of language resources (dictionaries, parallel texts, part-of-speech taggers, machine translators, etc.), we proposed the Language Grid [1]: it wraps existing language resources as atomic services, enables users to create new services by combining those atomic services, and reduces the negotiation costs related to intellectual property rights [4]. Our slogan is “language services from language resources.” We believe that modularization with recombination is the key to creating a full range of customized language environments for various user communities.

  18. Method for control of subsurface coal gasification

    DOEpatents

    Komar, Charles A.

    1976-12-14

    The burn front in an in situ underground coal gasification operation is controlled by utilizing at least two parallel groups of vertical bore holes disposed in the coalbed at spaced-apart locations in planes orthogonal to the plane of maximum permeability in the coalbed. The combustion of the coal is initiated in the coalbed adjacent to one group of the bore holes to establish a combustion zone extending across the group while the pressure of the combustion supporting gas mixture and/or the combustion products is regulated at each well head by valving to control the burn rate and maintain a uniform propagation of the burn front between the spaced-apart hole groups to gasify virtually all the coal lying therebetween.

  19. A clinical investigation of the efficacy of two dentifrices for the reduction of supragingival calculus formation.

    PubMed

    Schiff, Thomas; Delgado, Evaristo; DeVizio, William; Proskin, Howard M

    2008-01-01

    The objective of this double-blind clinical study, conducted in accordance with the Volpe-Manhold design for studies of dental calculus, was to compare the efficacy of a dentifrice containing 0.3% triclosan/2.0% polyvinylmethyl ether/maleic acid (PVM/MA) copolymer/0.243% sodium fluoride in a 17% dual silica base (Colgate Total Advanced Toothpaste) with that of a commercially available dentifrice containing 0.243% sodium fluoride in a silica base (Crest Cavity Protection Toothpaste) with respect to the reduction of supragingival calculus formation. Adult male and female subjects from the San Francisco area were entered into the eight-week pre-test phase of the study. Subjects received an evaluation of oral soft and hard tissues and were given a complete oral prophylaxis. They were provided with a non-tartar-control placebo dentifrice and a soft-bristled adult toothbrush, and were instructed to brush their teeth twice daily (morning and evening) for one minute. After eight weeks of using the placebo dentifrice, subjects were examined for baseline supragingival calculus formation using the Volpe-Manhold Calculus Index. Qualifying subjects were randomized into two treatment groups balanced for gender and baseline calculus scores. All subjects entered into the twelve-week test phase were given a complete oral prophylaxis, and were provided with their assigned dentifrice and a soft-bristled adult toothbrush for home use. Subjects were instructed to brush their teeth for one minute twice daily (in the morning and evening). Prior to each study visit, subjects refrained from brushing their teeth, eating, and drinking for four hours. Seventy-seven (77) subjects complied with the protocol and completed the study. At the twelve-week examination, the Test Dentifrice group presented a mean Volpe-Manhold Calculus Index score of 13.22 and the Control Dentifrice group presented a score of 20.29. After twelve weeks of product use, the Test Dentifrice group thus exhibited 34.8% less supragingival calculus formation than the Control Dentifrice group (statistically significant at p < 0.05). The overall results of this double-blind clinical study support the conclusion that twelve weeks' use of a dentifrice containing 0.3% triclosan/2.0% PVM/MA copolymer/0.243% sodium fluoride in a 17% dual silica base provides significantly greater control of supragingival calculus formation than a commercially available dentifrice containing 0.243% sodium fluoride in a silica base.

  20. Efficacy of virtual reality exposure therapy for treatment of dental phobia: a randomized control trial.

    PubMed

    Raghav, Kumar; Van Wijk, A J; Abdullah, Fawzia; Islam, Md Nurul; Bernatchez, Marc; De Jongh, Ad

    2016-02-27

    Virtual Reality Exposure Therapy (VRET) has been found to be a promising and viable alternative to in vivo exposure in the treatment of specific phobias. However, its usefulness for treating dental phobia is unexplored. The aims of the present study are to determine: (a) the efficacy of VRET versus an informational pamphlet (IP) control group in terms of dental trait and state anxiety reductions at 1 week, 3 months and 6 months follow-up; (b) the real-time physiological arousal [heart rate (HR)] of VRET group participants during and following therapy; and (c) the relation between subjective (presence) and objective (HR) measures during VRET. This study is a single-blind, randomized controlled trial with two parallel arms in which participants will be allocated to VRET or IP with a ratio of 1:1. Thirty participants (18-50 years) meeting the Phobia Checklist criteria for dental phobia will undergo block randomization with allocation concealment. The primary outcome measures include participants' dental trait anxiety (Modified Dental Anxiety Scale and Dental Fear Survey) and state anxiety (Visual Analogue Scale) measured at baseline (T0), at intervention (T1), and at 1-week (T2), 3-month (T3) and 6-month (T4) follow-up. A behavior test will be conducted before and after the intervention. The secondary outcome measures are real-time evaluation of HR during the VRET intervention and the VR (Virtual Reality) experience (presence, realism, nausea) following it. The data will be analyzed using intention-to-treat and per-protocol analyses. This study uses a novel, non-invasive VRET, which may provide a possible alternative treatment for dental anxiety and phobia. ISRCTN25824611; date of registration: 26 October 2015.
