Accelerating Climate Simulations Through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark
2009-01-01
Unconventional multi-core processors (e.g., IBM Cell B.E. and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
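The offloading pattern the abstract describes can be sketched generically: independent compute-intensive calls are dispatched across a pool of accelerator workers. The following Python sketch is illustrative only, not the IBM DAV API; the names AcceleratorPool, offload, and solar_radiation_column are hypothetical stand-ins.

```python
# Illustrative sketch of DAV-style offloading (hypothetical names, not
# the IBM DAV API): independent compute-intensive calls are dispatched
# to a pool of "accelerator" workers in a load-balanced way.
from concurrent.futures import ThreadPoolExecutor

def solar_radiation_column(column):
    # Stand-in for the compute-intensive kernel offloaded to a Cell blade.
    return sum(x * x for x in column)

class AcceleratorPool:
    """Dispatches independent column computations across n workers."""
    def __init__(self, n_workers):
        self.pool = ThreadPoolExecutor(max_workers=n_workers)

    def offload(self, columns):
        # map() spreads the calls over the workers while preserving
        # input order, mimicking scalable, load-balanced offloading.
        return list(self.pool.map(solar_radiation_column, columns))

pool = AcceleratorPool(n_workers=2)
results = pool.offload([[1, 2], [3, 4]])
```

The key property mirrored here is that the caller never sees the remote workers directly; the dispatch layer hides them, which is the "seamless" aspect claimed for DAV.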
Accelerating Climate and Weather Simulations through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark
2011-01-01
Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using the Message Passing Interface. To address the challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to running IP over the IB protocol. Full utilization of the IB Sockets Direct Protocol and the lower-latency production version of IBM DAV will reduce this overhead.
Hot Chips and Hot Interconnects for High End Computing Systems
NASA Technical Reports Server (NTRS)
Saini, Subhash
2005-01-01
I will discuss several processors: 1. the Cray proprietary processor used in the Cray X1; 2. the IBM Power3 and Power4 used in IBM SP3 and SP4 systems; 3. the Intel Itanium and Xeon, used in the SGI Altix systems and clusters, respectively; 4. the IBM System-on-a-Chip used in the IBM BlueGene/L; 5. the HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. the SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. an NEC proprietary processor, which is used in the NEC SX-6/7; 8. the Power4+ processor, which is used in the Hitachi SR11000; and 9. the NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by the interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming-model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).
2008-04-01
Space GmbH as follows: B. TECHNICAL PROPOSAL/DESCRIPTION OF WORK Cell: A Revolutionary High Performance Computing Platform On 29 June 2005 [1...IBM has announced that it has partnered with Mercury Computer Systems, a maker of specialized computers. The Cell chip provides massive floating-point...the computing industry away from the traditional processor technology dominated by Intel. While in the past, the development of computing power has
NASA Astrophysics Data System (ADS)
Sanford, James L.; Schlig, Eugene S.; Prache, Olivier; Dove, Derek B.; Ali, Tariq A.; Howard, Webster E.
2002-02-01
The IBM Research Division and eMagin Corp. jointly have developed a low-power VGA direct-view active matrix OLED display, fabricated on a crystalline silicon CMOS chip. The display is incorporated in IBM prototype wristwatch computers running the Linux operating system. IBM designed the silicon chip and eMagin developed the organic stack and performed the back-end-of-line processing and packaging. Each pixel is driven by a constant current source controlled by a CMOS RAM cell, and the display receives its data from the processor memory bus. This paper describes the OLED technology and packaging, and outlines the design of the pixel and display electronics and the processor interface. Experimental results are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shujia; Duffy, Daniel; Clune, Thomas
The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak-performance increase over conventional processors makes it very attractive for fulfilling this requirement. However, the Cell's characteristics, 256KB of local memory per SPE and a new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity (the ratio of computational load to main memory transfers), and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDized four independent columns and included several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (approximately 25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor
2010-05-01
Abstract—The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals...Broadband Engine Processor (Cell BE). The process of adapting the serial-based MUSIC algorithm to the Cell BE will be analyzed in terms of parallelism and...using the Multiple Signal Classification (MUSIC) algorithm [4] • Computation of Focus matrix • Computation of number of sources • Separation of Signal
Impacts of the IBM Cell Processor to Support Climate Models
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Duffy, Daniel; Clune, Tom; Suarez, Max; Williams, Samuel; Halem, Milt
2008-01-01
NASA is interested in the performance and cost benefits of adapting its applications to the IBM Cell processor. However, the Cell's 256KB of local memory per SPE and its new communication mechanism make it very challenging to port an application. We selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics (approximately 50% of computational time), (2) has a high computational load relative to transferring data from and to main memory, and (3) performs independent calculations across multiple columns. We converted the baseline code (single-precision Fortran) to C, ported it while manually SIMDizing four independent columns, and found that a Cell with 8 SPEs can process 2274 columns per second. Compared with the baseline results, the Cell is approximately 5.2x, 8.2x, and 15.1x faster than a core on Intel Woodcrest, Dempsey, and Itanium2, respectively. We believe this dramatic performance improvement makes a hybrid cluster with Cell and traditional nodes competitive.
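The manual SIMDization strategy, processing four independent columns in lockstep, can be illustrated with NumPy standing in for the Cell SPE's 128-bit vector registers (4 x 32-bit floats). This is a sketch of the idea only, not the GEOS-5 solar radiation code; both kernel functions are hypothetical stand-ins.

```python
# Sketch of manual SIMDization (not the GEOS-5 code): four independent
# columns are laid out side by side so that one vector operation updates
# all four "lanes" at once, analogous to the Cell SPE's 128-bit SIMD
# registers holding 4 x 32-bit floats.
import numpy as np

def scalar_kernel(col):
    # Hypothetical per-column physics: a running update down the column.
    out = np.empty_like(col)
    acc = 0.0
    for k in range(len(col)):
        acc = 0.5 * acc + col[k]
        out[k] = acc
    return out

def simd4_kernel(cols4):
    # cols4 has shape (levels, 4): one row per level, one lane per column.
    out = np.empty_like(cols4)
    acc = np.zeros(4, dtype=cols4.dtype)
    for k in range(cols4.shape[0]):
        acc = 0.5 * acc + cols4[k]   # one vector op updates 4 columns
        out[k] = acc
    return out

cols = np.arange(12, dtype=np.float32).reshape(3, 4)
vec = simd4_kernel(cols)
ref = np.stack([scalar_kernel(cols[:, j]) for j in range(4)], axis=1)
```

The vectorized loop does the same arithmetic per level as the scalar version but amortizes it over four columns, which is why independent column physics is such a good fit for the SPE.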
State University of New York Institute of Technology (SUNYIT) Summer Scholar Program
2009-10-01
COVERED (From - To) March 2007 - April 2009 4. TITLE AND SUBTITLE STATE UNIVERSITY OF NEW YORK INSTITUTE OF TECHNOLOGY (SUNYIT) SUMMER SCHOLAR...Even with access to the Arctic Regional Supercomputer Center (ARSC), evolving a 9/7 wavelet with four multi-resolution levels (MRA-4) involves...evaluated over the multiple processing elements in the Cell processor. It was tested on Cell processors in a Sony PlayStation 3 and on an IBM QS20 blade
Multi-Core Programming Design Patterns: Stream Processing Algorithms for Dynamic Scene Perceptions
2014-05-01
processor developed by IBM and other companies, incorporates the POWER5 processor as the Power Processor Element (PPE), one of the early general...deliver a power-efficient single-precision peak performance of more than 256 GFlops. Substantially more raw power became available later, when NVIDIA...algorithms, including IBM's Cell/B.E., GPUs from NVIDIA and AMD, and many-core CPUs from Intel. The vast growth of digital video content has been a
Integration, Development and Performance of the 500 TFLOPS Heterogeneous Cluster (Condor)
2012-08-01
PlayStation 3 for High Performance Cluster Computing," LAPACK Working Note 185, 2007. [4] Feng, W., X. Feng, and R. Ge, "Green Supercomputing Comes of...CONFERENCE PAPER (Post Print) 3. DATES COVERED (From - To) JUN 2010 - MAY 2013 4. TITLE AND SUBTITLE INTEGRATION, DEVELOPMENT AND PERFORMANCE OF...and streaming processing; the PlayStation 3 uses the IBM Cell BE processor, which adopts the multi-processor, single-instruction-multiple-data (SIMD)
Accuracy of the lattice-Boltzmann method using the Cell processor
NASA Astrophysics Data System (ADS)
Harvey, M. J.; de Fabritiis, G.; Giupponi, G.
2008-11-01
Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack full double-precision support and some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony PlayStation 3 and the IBM QS20/QS21 blade, obtaining speed-up factors of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that the choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.
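The sensitivity to single precision that the authors observed can be demonstrated with a toy accumulation, an illustration of rounding-driven conservation drift rather than anything from the paper's LB code:

```python
# Toy illustration (not the paper's LB code) of precision-driven
# conservation drift: accumulating many small mass increments strays
# further from the exact total in float32 than in float64.
import numpy as np

def accumulated_mass(dtype, n=10000, dm=0.1):
    # Repeatedly add a small increment, rounding to `dtype` each step,
    # as a stand-in for per-timestep mass updates in a simulation.
    mass = dtype(0.0)
    for _ in range(n):
        mass = dtype(mass + dtype(dm))
    return float(mass)

exact = 10000 * 0.1
err32 = abs(accumulated_mass(np.float32) - exact)
err64 = abs(accumulated_mass(np.float64) - exact)
```

The float32 total drifts measurably from the exact sum while the float64 total stays far closer, the same mechanism that degrades mass and momentum conservation in a long single-precision simulation.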
MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barhen, Jacob; Kerekes, Ryan A; ST Charles, Jesse Lee
2008-01-01
High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next-generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper highlights recent experience with four different parallel processors applied to signal processing tasks that are directly relevant to signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor, applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Doppler-sensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM Cell Broadband Engine, applied to a 2-D discrete Fourier transform (DFT) kernel for image processing and frequency-domain processing. The third is the NVIDIA graphical processor, applied to document feature clustering. EnLight Optical Core Processor: Optical processing is inherently capable of high parallelism that can be translated to very high performance, low-power-dissipation computing. The EnLight 256 is a small-form-factor signal processing chip (5x5 cm2) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today.
The optical core performs the matrix-vector multiplications, where the nominal matrix size is 256x256. The system clock is 125 MHz. At each clock cycle, 128K multiply-and-add operations are carried out, which yields a peak performance of 16 TeraOPS. IBM Cell Broadband Engine: The Cell processor is the product of 5 years of sustained, intensive R&D collaboration (involving over $400M of investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed in 128-bit-wide single instruction, multiple data (SIMD) streams. An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation 3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units: The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate the clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.
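The quoted EnLight peak figure follows from simple arithmetic, sketched here as a sanity check (not vendor code):

```python
# Arithmetic sanity check of the quoted EnLight figures: a 256x256
# matrix-vector product needs 256*256 multiplies and about as many
# adds, i.e. ~128K operations per cycle; at a 125 MHz clock that
# yields ~16 TeraOPS peak.
n = 256
ops_per_cycle = 2 * n * n             # one multiply + one add per matrix entry
clock_hz = 125e6                      # 125 MHz system clock
peak_ops = ops_per_cycle * clock_hz   # operations per second
```

131,072 operations per cycle times 125 MHz gives 1.6384e13 operations per second, consistent with the stated 16 TeraOPS peak.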
Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors
2009-09-01
TFLOPS of PlayStation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes...3. INFORMATION MANAGEMENT FOR PARALLELIZATION AND STREAMING 4. RESULTS
Conversion of Mass Storage Hierarchy in an IBM Computer Network
1989-03-01
storage devices; GUIDE (IBM users' group for DOS operating systems); IBM (International Business Machines); IBM 370/145 (CPU introduced in 1970); IBM 370/168 (CPU)...February 12, 1985, Information Systems Group, International Business Machines Corporation. "IBM 3090 Processor Complex" and Mass Storage System..." Mainframe Journal, pp. 15-26, 64-65, Dallas, Texas, September-October 1987. 3. International Business Machines Corporation, Introduction to IBM 3380 Storage
MATCHED FILTER COMPUTATION ON FPGA, CELL, AND GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAKER, ZACHARY K.; GOKHALE, MAYA B.; TRIPP, JUSTIN L.
2007-01-08
The matched filter is an important kernel in the processing of hyperspectral data. The filter enables researchers to sift useful data from instruments that span large frequency bands. In this work, they evaluate the performance of a matched filter algorithm implemented on an accelerated co-processor (XD1000), the IBM Cell microprocessor, and the NVIDIA GeForce 6900 GTX GPU graphics card. They provide extensive discussion of the challenges and opportunities afforded by each platform. In particular, they explore the problem of partitioning the filter most efficiently between the host CPU and the co-processor. Using their results, they derive several performance metrics that provide the optimal solution for a variety of application situations.
Compute Server Performance Results
NASA Technical Reports Server (NTRS)
Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)
1994-01-01
Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating-point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single-processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
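The price-performance comparison reduces to simple arithmetic; the implied system price below is a back-of-the-envelope derivation from the quoted numbers, not a figure taken from the report.

```python
# Worked example of the price-performance ratio (PPR) comparison using
# the figures quoted above. PPR is FLOPS per dollar, so the implied
# price of a system is its FLOPS rate divided by its PPR.
pprs = {"IBM RS/6000 590": 260, "SGI R8000": 220, "Cray C90": 160}
best = max(pprs, key=pprs.get)        # highest FLOPS per dollar

c90_rate = 460e6                      # 460 MFLOPS single-processor rate
implied_c90_price = c90_rate / pprs["Cray C90"]   # dollars
```

Dividing 460 MFLOPS by 160 FLOPS per dollar implies a C90 processor price of roughly $2.9 million, which is why the slower but far cheaper RISC workstations win on price-performance.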
A Survey of Parallel Sorting Algorithms.
1981-12-01
see that, in this algorithm, each Processor i, for 1 ≤ i ≤ p-2, interacts directly only with Processors i+1 and i-1. Processor 0 only interacts with...[Chan76] Chandra, A.K., "Maximal Parallelism in Matrix Multiplication," IBM Report RC 6193, Watson Research Center, Yorktown Heights, N.Y., October 1976
Goldfinger, D; Medici, M A; Hsi, R; McPherson, J; Connelly, M
1983-01-01
Clinical studies have suggested that granulocyte transfusions may be of value in the treatment of septic neonatal patients who present with severe granulocytopenia. We have developed a protocol for the preparation of granulocyte concentrates from freshly collected units of whole blood, using an automated blood cell processor. The red cells were washed with saline. Then, the buffy coats were collected from the washed red cells and studied for their suitability as granulocyte concentrates for neonatal transfusion. The mean number of granulocytes per concentrate was 1.6 X 10(9) in a mean volume of 25 ml. Studies of granulocyte function, including viability, random mobility, chemotaxis, phagocytosis and nitro-blue tetrazolium reduction, demonstrated that the granulocytes were functionally unimpaired following preparation of the concentrates. These studies suggest that concentrates of functional granulocytes, suitable for transfusion to neonatal patients, can be prepared from fresh units of whole blood, using a cell processor. This procedure is more cost-effective than leukapheresis and allows for delivery of granulocytes for transfusion in a more timely fashion.
Initial Performance Results on IBM POWER6
NASA Technical Reports Server (NTRS)
Saini, Subhash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piyush
2008-01-01
The POWER5+ processor has a faster memory bus than that of the previous-generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the latter is better than that of the former (5.7 GB/s vs. 4.3 GB/s). The reason is that in the POWER5+, the two cores on the chip share the L2 cache, L3 cache, and memory bus. The memory controller is also on the chip and is shared by the two cores, which serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM designed the POWER6 processor to avoid the bottlenecks due to the L2 cache, memory controller, and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB, double that of the POWER5+), memory controller, and buffer chips. Each core in the POWER6 runs at 4.7 GHz instead of the 1.9 GHz of the POWER5+. In this paper, we evaluate the performance of a dual-core POWER6-based IBM p6-570 system and compare it with that of a dual-core POWER5+-based IBM p575+ system. In this evaluation, we used the High-Performance Computing Challenge (HPCC) benchmarks, the NAS Parallel Benchmarks (NPB), and four real-world applications: three from computational fluid dynamics and one from climate modeling.
I Love to Rite! Spelling Checkers in the Writing Classroom.
ERIC Educational Resources Information Center
Eiser, Leslie
1986-01-01
Highlights the advantages of word processors and spelling checkers in improving student writing skills. Explains how spelling checkers work and describes the types of available checkers. Also provides lists of Apple, IBM, and Commodore word processors and checkers. (ML)
Parallel hyperbolic PDE simulation on clusters: Cell versus GPU
NASA Astrophysics Data System (ADS)
Rostrup, Scott; De Sterck, Hans
2010-12-01
Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications. 
Program summary Program title: SWsolver Catalogue identifier: AEGY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL v3 No. of lines in distributed program, including test data, etc.: 59 168 No. of bytes in distributed program, including test data, etc.: 453 409 Distribution format: tar.gz Programming language: C, CUDA Computer: Parallel computing clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator. Operating system: Linux Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs. RAM: Tested on problems requiring up to 4 GB per compute node. Classification: 12 External routines: MPI, CUDA, IBM Cell SDK Nature of problem: MPI-parallel simulation of the shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA. Solution method: SWsolver provides 3 implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster. Additional comments: Sub-program numdiff is used for the test run.
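The coarse level of parallelism SWsolver uses, MPI domain decomposition with halo exchange, can be sketched in pure Python (no real MPI; all function names here are illustrative). Each "rank" updates its subdomain with a 3-point stencil after receiving one-cell halos from its neighbours, and the stitched result matches a serial update exactly.

```python
# Pure-Python sketch (no real MPI) of 1D domain decomposition with halo
# exchange: split the domain across "ranks", pad each part with one-cell
# halos from its neighbours, update, then drop the halos. Boundary
# values are simply held fixed in this toy stencil.
def step_serial(u):
    return [u[i] if i in (0, len(u) - 1)
            else 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1]
            for i in range(len(u))]

def step_decomposed(u, n_ranks):
    # Assumes len(u) is divisible by n_ranks.
    chunk = len(u) // n_ranks
    parts = [u[r * chunk:(r + 1) * chunk] for r in range(n_ranks)]
    new_parts = []
    for r, part in enumerate(parts):
        left = parts[r - 1][-1] if r > 0 else None             # halo cells
        right = parts[r + 1][0] if r < n_ranks - 1 else None
        padded = ([left] if left is not None else []) + part \
               + ([right] if right is not None else [])
        updated = step_serial(padded)
        lo = 1 if left is not None else 0
        new_parts.append(updated[lo:lo + len(part)])           # drop halos
    return [x for p in new_parts for x in p]

u0 = [float((i * i) % 7) for i in range(12)]
```

Because each rank performs exactly the same arithmetic on the same values as the serial loop, the decomposed result is bitwise identical, which is the correctness property an MPI implementation must preserve.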
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudman, D.L.
For 17 years, the sensor-based IBM 1800 computer successfully fulfilled Sun's requirements for data acquisition and process control at its petroleum refinery in Toledo, Ohio. However, faltering reliability due to deterioration, coupled with IBM's announced withdrawal of contractual hardware maintenance, prompted Sun to approach IBM regarding potential solutions to the problem of economically maintaining the IBM 1800 as a viable system in the Toledo Refinery. In concert, IBM and Sun identified several options, but the IBM proposal which held the most promise for long-term success was the direct replacement of the IBM 1800 processor and software systems with an IBM 4300 running IBM's licensed program product ''Advanced Control System'' (ACS). Sun chose this solution. The intent of this paper is to chronicle the highlights of the project which successfully revitalized the process computer facilities in Sun's Toledo Refinery in only 10 months, under financial constraints, and using limited human resources.
Parallel network simulations with NEURON.
Migliore, M; Cannia, C; Lytton, W W; Markram, Henry; Hines, M L
2006-10-01
The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.
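The scheduling idea, integrating each subnet for an interval equal to the minimum inter-processor connection delay so that spikes need only be exchanged at interval boundaries, can be sketched as follows (hypothetical code, not NEURON's API):

```python
# Sketch of the integration schedule (hypothetical, not NEURON's API):
# spikes generated during one interval cannot arrive before the next,
# so each processor can integrate its subnet independently for the
# minimum inter-processor delay before any spike exchange is needed.
def run(t_stop, delays):
    min_delay = min(delays)     # safe independent-integration interval
    exchanges = 0
    t = 0.0
    while t < t_stop:
        # integrate_subnet(t, t + min_delay) would run here on each rank
        t += min_delay
        exchanges += 1          # collective spike exchange at the boundary
    return exchanges

n_exchanges = run(t_stop=10.0, delays=[2.0, 2.5, 4.0])
```

The larger the minimum connection delay, the fewer exchanges per simulated second, which is why communication overhead can stay below the cache benefit the abstract reports.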
Dynamic Load Balancing For Grid Partitioning on a SP-2 Multiprocessor: A Framework
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full-scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance across processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework for dynamic load balancing for CFD applications, called Jove, is presented. One processor is designated as the decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove after a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new distribution should be adopted, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove has been completely implemented on a single IBM SP2 node. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full-scale grid partitioning on the target machine, the IBM SP2.
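The Jove decision pipeline, propose a balanced partition, evaluate the cost, then decide, can be sketched schematically (illustrative names and a deliberately simple cost model, not the actual Jove implementation):

```python
# Schematic of a Jove-style decision (illustrative, not the real code):
# propose an ideal balanced partition, estimate the gain as the
# reduction of the most-loaded processor's work, and accept the new
# distribution only if the gain outweighs the data-movement cost.
def propose_partition(loads):
    total = sum(loads)
    return [total / len(loads)] * len(loads)    # ideal equal split

def should_rebalance(loads, move_cost):
    new = propose_partition(loads)
    gain = max(loads) - max(new)   # speedup comes from the slowest processor
    return gain > move_cost        # cost evaluation -> decision

decision = should_rebalance([10.0, 2.0, 3.0], move_cost=2.5)
```

Running the decision logic on a dedicated processor, as Jove does, lets the CFD processors keep computing with the old distribution while the repartition is evaluated.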
ORNL Cray X1 evaluation status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, P.K.; Alexander, R.A.; Apra, E.
2004-05-01
On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE application codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the Parallel Ocean Program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.
RISC Processors and High Performance Computing
NASA Technical Reports Server (NTRS)
Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)
1995-01-01
In this tutorial, we will discuss top five current RISC microprocessors: The IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer, the DEC Alpha, which is in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and also in the context of implementing a real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance per dollar figures will be presented. The next generation of the NP13 will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.
Massively parallel quantum computer simulator
NASA Astrophysics Data System (ADS)
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massive parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as a IBM BlueGene/L, a IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, a SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as benchmark for testing high-end parallel computers.
NASA Astrophysics Data System (ADS)
Sexton, James C.
1990-08-01
The GF11 project at IBM's T. J. Watson Research Center is entering full production for QCD numerical calculations. This paper describes the GF11 hardware and system software, and discusses the first production program which has been developed to run on GF11. This program is a variation of the Cabbibo Marinari pure gauge Monte Carlo program for SU(3) and is currently sustaining almost 6 gigaflops on 360 processors in GF11.
Implementation of a cone-beam backprojection algorithm on the cell broadband engine processor
NASA Astrophysics Data System (ADS)
Bockenbach, Olivier; Knaup, Michael; Kachelrieß, Marc
2007-03-01
Tomographic image reconstruction is computationally very demanding. In all cases the backprojection represents the performance bottleneck due to the high operational count and due to the high demand put on the memory subsystem. In the past, solving this problem has lead to the implementation of specific architectures, connecting Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) to memory through dedicated high speed busses. More recently, there have also been attempt to use Graphic Processing Units (GPUs) to perform the backprojection step. Originally aimed at the gaming market, IBM, Toshiba and Sony have introduced the Cell Broadband Engine (CBE) processor, often considered as a multicomputer on a chip. Clocked at 3 GHz, the Cell allows for a theoretical performance of 192 GFlops and a peak data transfer rate over the internal bus of 200 GB/s. This performance indeed makes the Cell a very attractive architecture for implementing tomographic image reconstruction algorithms. In this study, we investigate the relative performance of a perspective backprojection algorithm when implemented on a standard PC and on the Cell processor. We compare these results to the performance achievable with FPGAs based boards and high end GPUs. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of 1024x1024 pixels into a volume of size 512x512x512 voxels. It took 3.2 minutes on the PC (single CPU) and is as fast as 13.6 seconds on the Cell.
Validation of the 1/12 degrees Arctic Cap Nowcast/Forecast System (ACNFS)
2010-11-04
IBM Power 6 ( Davinci ) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are...run using 320 processors on the Navy DSRC IBM Power 6 ( Davinci ) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour...meter. As more observations become available, further studies of ice draft will be used as a validation tool . The IABP program archived 102 Argos
Validation of the 1/12 deg Arctic Cap Nowcast/Forecast System (ACNFS)
2010-11-04
IBM Power 6 ( Davinci ) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are...run using 320 processors on the Navy DSRC IBM Power 6 ( Davinci ) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour...meter. As more observations become available, further studies of ice draft will be used as a validation tool . The IABP program archived 102 Argos
Software design and documentation language, revision 1
NASA Technical Reports Server (NTRS)
Kleine, H.
1979-01-01
The Software Design and Documentation Language (SDDL) developed to provide an effective communications medium to support the design and documentation of complex software applications is described. Features of the system include: (1) a processor which can convert design specifications into an intelligible, informative machine-reproducible document; (2) a design and documentation language with forms and syntax that are simple, unrestrictive, and communicative; and (3) methodology for effective use of the language and processor. The SDDL processor is written in the SIMSCRIPT II programming language and is implemented on the UNIVAC 1108, the IBM 360/370, and Control Data machines.
PS3 CELL Development for Scientific Computation and Research
NASA Astrophysics Data System (ADS)
Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.
2007-12-01
The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) by using vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This fact entwines well with the needs of the scientific community where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs reducing required run time by one sixth. Further vectorization of the code can allow for 4 simultanious floating point operations by using the SIMD (single instruction multiple data) capabilities of the SPU increasing efficiency 24 times.
Communication overhead on the Intel Paragon, IBM SP2 and Meiko CS-2
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1995-01-01
Interprocessor communication overhead is a crucial measure of the power of parallel computing systems-its impact can severely limit the performance of parallel programs. This report presents measurements of communication overhead on three contemporary commercial multicomputer systems: the Intel Paragon, the IBM SP2 and the Meiko CS-2. In each case the time to communicate between processors is presented as a function of message length. The time for global synchronization and memory access is discussed. The performance of these machines in emulating hypercubes and executing random pairwise exchanges is also investigated. It is shown that the interprocessor communication time depends heavily on the specific communication pattern required. These observations contradict the commonly held belief that communication overhead on contemporary machines is independent of the placement of tasks on processors. The information presented in this report permits the evaluation of the efficiency of parallel algorithm implementations against standard baselines.
Multiple Embedded Processors for Fault-Tolerant Computing
NASA Technical Reports Server (NTRS)
Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy
2005-01-01
A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
Mission and data operations IBM 360 user's guide
NASA Technical Reports Server (NTRS)
Balakirsky, J.
1973-01-01
The M and DO computer systems are introduced and supplemented. The hardware and software status is discussed, along with standard processors and user libraries. Data management techniques are presented, as well as machine independence, debugging facilities, and overlay considerations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yao; Balaprakash, Prasanna; Meng, Jiayuan
We present Raexplore, a performance modeling framework for architecture exploration. Raexplore enables rapid, automated, and systematic search of architecture design space by combining hardware counter-based performance characterization and analytical performance modeling. We demonstrate Raexplore for two recent manycore processors IBM Blue- Gene/Q compute chip and Intel Xeon Phi, targeting a set of scientific applications. Our framework is able to capture complex interactions between architectural components including instruction pipeline, cache, and memory, and to achieve a 3–22% error for same-architecture and cross-architecture performance predictions. Furthermore, we apply our framework to assess the two processors, and discover and evaluate a list ofmore » architectural scaling options for future processor designs.« less
1979-01-01
specifications have been prepared for a DoD communications processor on an IBM minicomputer, a minicomputer time sharing system for the DEC PDP-11 and...the Honeywell Level 6. a virtual machine monitor for the IBM 370, and Multics [10] for the Honeywell Level 68. MECHANISMS FOR KERNEL IMPLEMENTATION...HOL INA ZJO : ANERIONS g PROCESSORn , c ...THEOREMS 1 ITP I-THEOREMS PROOF EVIDENCE - p II KV./370 FORMAL DESIGN PROCESS M4ODULAR DECOMPOSITION * NON
2013-05-25
graphics processors by IBM, AMD, and nVIDIA . They are between general-purpose pro- cessors and special-purpose processors. In Phase II. 3.10 Measure of...particular, Dr. Kevin Irick started a company Silicon Scapes and he has been the CEO. 5 Implications for Related/Future Research We speculate that...final project report in Jan. 2011. At the test and validation stage of the project. FANTOM’s partner at Raytheon quit from his company and hence from
Software fault tolerance in computer operating systems
NASA Technical Reports Server (NTRS)
Iyer, Ravishankar K.; Lee, Inhwan
1994-01-01
This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.
Program Processes Thermocouple Readings
NASA Technical Reports Server (NTRS)
Quave, Christine A.; Nail, William, III
1995-01-01
Digital Signal Processor for Thermocouples (DART) computer program implements precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. Written using LabVIEW software. DART available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later and IBM PC-series and compatible computers running Microsoft Windows 3.1. Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. LabVIEW software product of National Instruments and not included with program.
NAVO MSRC Navigator. Spring 2006
2006-01-01
all of these upgrades are complete, the effective computing power of the NAVO MSRC will be essentially tripled, as measured by sustainable ... performance on the HPCMP benchmark suite. All four of these systems will be configured with two gigabytes of memory per processor, IBM’s “Federation” inter
A microcomputer interface for a digital audio processor-based data recording system.
Croxton, T L; Stump, S J; Armstrong, W M
1987-10-01
An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer.
A microcomputer interface for a digital audio processor-based data recording system.
Croxton, T L; Stump, S J; Armstrong, W M
1987-01-01
An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer. PMID:3676444
Benchmarking gate-based quantum computers
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans
2017-11-01
With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
NASA Technical Reports Server (NTRS)
Aucoin, P. J.; Stewart, J.; Mckay, M. F. (Principal Investigator)
1980-01-01
This document presents instructions for analysts who use the EOD-LARSYS as programmed on the Purdue University IBM 370/148 (recently replaced by the IBM 3031) computer. It presents sample applications, control cards, and error messages for all processors in the system and gives detailed descriptions of the mathematical procedures and information needed to execute the system and obtain the desired output. EOD-LARSYS is the JSC version of an integrated batch system for analysis of multispectral scanner imagery data. The data included is designed for use with the as built documentation (volume 3) and the program listings (volume 4). The system is operational from remote terminals at Johnson Space Center under the virtual machine/conversational monitor system environment.
Graphing and Percentage Applications Using the Personal Computer.
ERIC Educational Resources Information Center
Innes, Jay
1985-01-01
The paper describes how "IBM Graphing Assistant" and "Apple Softgraph" can foster a multifaceted approach to application of mathematical concepts and how a survey can be undertaken using the computer as word processor, data bank, and source of visual displays. Mathematical skills reinforced include estimating, rounding, graphing, and solving…
Expert Systems on Multiprocessor Architectures. Volume 2. Technical Reports
1991-06-01
Report RC 12936 (#58037). IBM T. J. Wartson Reiearch Center. July 1987. Alan Jay Smith. Cache memories. Coniputing Sitrry., 1.1(3): I.3-5:30...basic-shared is an instrument for ashared memory design. The components panels are processor- qload-scrolling-bar-panel, memory-qload-scrolling-bar-panel
Optimization strategies for molecular dynamics programs on Cray computers and scalar work stations
NASA Astrophysics Data System (ADS)
Unekis, Michael J.; Rice, Betsy M.
1994-12-01
We present results of timing runs and different optimization strategies for a prototype molecular dynamics program that simulates shock waves in a two-dimensional (2-D) model of a reactive energetic solid. The performance of the program may be improved substantially by simple changes to the Fortran or by employing various vendor-supplied compiler optimizations. The optimum strategy varies among the machines used and will vary depending upon the details of the program. The effect of various compiler options and vendor-supplied subroutine calls is demonstrated. Comparison is made between two scalar workstations (IBM RS/6000 Model 370 and Model 530) and several Cray supercomputers (X-MP/48, Y-MP8/128, and C-90/16256). We find that for a scientific application program dominated by sequential, scalar statements, a relatively inexpensive high-end work station such as the IBM RS/60006 RISC series will outperform single processor performance of the Cray X-MP/48 and perform competitively with single processor performance of the Y-MP8/128 and C-9O/16256.
Lange, Sandra; Steder, Anne; Killian, Doreen; Knuebel, Gudrun; Sekora, Anett; Vogel, Heike; Lindner, Iris; Dunkelmann, Simone; Prall, Friedrich; Murua Escobar, Hugo; Freund, Mathias; Junghanss, Christian
2017-02-01
An intra-bone marrow (IBM) hematopoietic stem cell transplantation (HSCT) is assumed to optimize the homing process and therefore to improve engraftment as well as hematopoietic recovery compared with conventional i.v. HSCT. This study investigated the feasibility and efficacy of IBM HSCT after nonmyeloablative conditioning in an allogeneic canine HSCT model. Two study cohorts received IBM HSCT of either density gradient (IBM-I, n = 7) or buffy coat (IBM-II, n = 6) enriched bone marrow cells. An historical i.v. HSCT cohort served as control. Before allogeneic HSCT experiments were performed, we investigated the feasibility of IBM HSCT by using technetium-99m marked autologous grafts. Scintigraphic analyses confirmed that most IBM-injected autologous cells remained at the injection sites, independent of the applied volume. In addition, cell migration to other bones occurred. The enrichment process led to different allogeneic graft volumes (IBM-I, 2 × 5 mL; IBM-II, 2 × 25 mL) and significantly lower counts of total nucleated cells in IBM-I grafts compared with IBM-II grafts (1.6 × 10 8 /kg versus 3.8 × 10 8 /kg). After allogeneic HSCT, dogs of the IBM-I group showed a delayed engraftment with lower levels of donor chimerism when compared with IBM-II or to i.v. HSCT. Dogs of the IBM-II group tended to reveal slightly faster early leukocyte engraftment kinetics than intravenously transplanted animals. However, thrombocytopenia was significantly prolonged in both IBM groups when compared with i.v. HSCT. In conclusion, IBM HSCT is feasible in a nonmyeloablative HSCT setting but failed to significantly improve engraftment kinetics and hematopoietic recovery in comparison with conventional i.v. HSCT. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Simon, Horst D.
1996-01-01
The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
NASA Technical Reports Server (NTRS)
1994-01-01
The objective of this contract was the investigation of the potential performance gains that would result from an upgrade of the Space Station Freedom (SSF) Data Management System (DMS) Embedded Data Processor (EDP) '386' design with the Intel Pentium (registered trade-mark of Intel Corp.) '586' microprocessor. The Pentium ('586') is the latest member of the industry standard Intel X86 family of CISC (Complex Instruction Set Computer) microprocessors. This contract was scheduled to run in parallel with an internal IBM Federal Systems Company (FSC) Internal Research and Development (IR&D) task that had the goal to generate a baseline flight design for an upgraded EDP using the Pentium. This final report summarizes the activities performed in support of Contract NAS2-13758. Our plan was to baseline performance analyses and measurements on the latest state-of-the-art commercially available Pentium processor, representative of the proposed space station design, and then phase to an IBM capital funded breadboard version of the flight design (if available from IR&D and Space Station work) for additional evaluation of results. Unfortunately, the phase-over to the flight design breadboard did not take place, since the IBM Data Management System (DMS) for the Space Station Freedom was terminated by NASA before the referenced capital funded EDP breadboard could be completed. The baseline performance analyses and measurements, however, were successfully completed, as planned, on the commercial Pentium hardware. The results of those analyses, evaluations, and measurements are presented in this final report.
Checking the Goldbach conjecture up to 4\\cdot 10^11
NASA Astrophysics Data System (ADS)
Sinisalo, Matti K.
1993-10-01
One of the most studied problems in additive number theory, Goldbach's conjecture, states that every even integer greater than or equal to 4 can be expressed as a sum of two primes. In this paper checking of this conjecture up to 4 \\cdot {10^{11}} by the IBM 3083 mainframe with vector processor is reported.
Penny S. Lawson; R. Edward Thomas; Elizabeth S Walker
1996-01-01
OPTIGRAMI V2 is a computer program available for IBM persaonl computer with 80286 and higher processors. OPTIGRAMI V2 determines the least-cost lumber grade mix required to produce a given cutting order for clear parts from rough lumber of known grades in a crosscut-first rough mill operation. It is a user-friendly integrated application that includes optimization...
Coherent Transient Systems Evaluation
1993-06-17
europium doped yttrium silicate in collaboration with IBM Almaden Research Center. Research into divalent ion doped crystals as photon gated materials...noise limited model and ignore the non-ideal properties of the medium, nonlinear effects, spatial crosstalk, gating efficiencies, local heating, the...demonstration of the coherent transient continuous optical processor was performed in europium doped yttrium silicate. Though hyperfine split ground
Imprinted-like biopolymeric micelles as efficient nanovehicles for curcumin delivery.
Zhang, Lili; Qi, Zeyou; Huang, Qiyu; Zeng, Ke; Sun, Xiaoyi; Li, Juan; Liu, You-Nian
2014-11-01
To enhance the solubility and improve the bioavailability of hydrophobic curcumin, a new kind of imprinted-like biopolymeric micelles (IBMs) was designed. The IBMs were prepared via co-assembly of gelatin-dextran conjugates with hydrophilic tea polyphenol, then crosslinking the assembled micelles and finally removing the template tea polyphenol by dialysis. The obtained IBMs show selective binding for polyphenol analogous drugs over other drugs. Furthermore, curcumin can be effectively encapsulated into the IBMs with 5×10(4)-fold enhancement of aqueous solubility. We observed the sustained drug release behavior from the curcumin-loaded IBMs (CUR@IBMs) in typical biological buffers. In addition, we found the cell uptake of CUR@IBMs is much higher than that of free curcumin. The cell cytotoxicity results illustrated that CUR@IBMs can improve the growth inhibition of HeLa cells compared with free curcumin, while the blank IBMs have little cytotoxicity. The in vivo animal study demonstrated that the IBMs could significantly improve the oral bioavailability of curcumin. Copyright © 2014 Elsevier B.V. All rights reserved.
A Future Accelerated Cognitive Distributed Hybrid Testbed for Big Data Science Analytics
NASA Astrophysics Data System (ADS)
Halem, M.; Prathapan, S.; Golpayegani, N.; Huang, Y.; Blattner, T.; Dorband, J. E.
2016-12-01
As increased sensor spectral data volumes from current and future Earth Observing satellites are assimilated into high-resolution climate models, intensive cognitive machine learning technologies are needed to data mine, extract and intercompare model outputs. It is clear today that the next generation of computers and storage, beyond petascale cluster architectures, will be data centric. They will manage data movement and process data in place. Future cluster nodes have been announced that integrate multiple CPUs with high-speed links to GPUs and MICS on their backplanes with massive non-volatile RAM and access to active flash RAM disk storage. Active Ethernet connected key value store disk storage drives with 10Ge or higher are now available through the Kinetic Open Storage Alliance. At the UMBC Center for Hybrid Multicore Productivity Research, a future state-of-the-art Accelerated Cognitive Computer System (ACCS) for Big Data science is being integrated into the current IBM iDataplex computational system `bluewave'. Based on the next gen IBM 200 PF Sierra processor, an interim two node IBM Power S822 testbed is being integrated with dual Power 8 processors with 10 cores, 1TB Ram, a PCIe to a K80 GPU and an FPGA Coherent Accelerated Processor Interface card to 20TB Flash Ram. This system is to be updated to the Power 8+, an NVlink 1.0 with the Pascal GPU late in 2016. Moreover, the Seagate 96TB Kinetic Disk system with 24 Ethernet connected active disks is integrated into the ACCS storage system. A Lightweight Virtual File System developed at the NASA GSFC is installed on bluewave. Since remote access to publicly available quantum annealing computers is available at several govt labs, the ACCS will offer an in-line Restricted Boltzmann Machine optimization capability to the D-Wave 2X quantum annealing processor over the campus high speed 100 Gb network to Internet 2 for large files. 
As an evaluation test of the cognitive functionality of the architecture, the following studies utilizing all the system components will be presented; (i) a near real time climate change study generating CO2 fluxes and (ii) a deep dive capability into an 8000 x8000 pixel image pyramid display and (iii) Large dense and sparse eigenvalue decomposition.
As-built design specification for proportion estimate software subsystem
NASA Technical Reports Server (NTRS)
Obrien, S. (Principal Investigator)
1980-01-01
The Proportion Estimate Processor evaluates four estimation techniques in order to get an improved estimate of the proportion of a scene that is planted in a selected crop. The four techniques to be evaluated were provided by the techniques development section and are: (1) random sampling; (2) proportional allocation, relative count estimate; (3) proportional allocation, Bayesian estimate; and (4) sequential Bayesian allocation. The user is given two options for computation of the estimated mean square error. These are referred to as the cluster calculation option and the segment calculation option. The software for the Proportion Estimate Processor is operational on the IBM 3031 computer.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method, and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
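The space-filling-curve idea behind the scalable I/O can be illustrated with a Morton (Z-order) index. This is a generic sketch of the curve itself, not the papers' actual compression scheme:

```python
def morton_index(x, y, bits=16):
    """Interleave the bits of (x, y) into a Z-order (Morton) index.
    Nearby 2-D cells tend to receive nearby indices, so ordering data
    along the curve keeps spatially local values close together on disk."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # x bits go to even positions
        z |= ((y >> i) & 1) << (2 * i + 1)  # y bits go to odd positions
    return z
```

Sorting grid cells by `morton_index` before differencing or compressing is one common way such curves improve I/O locality.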
Efficacy of Code Optimization on Cache-Based Processors
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), and Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
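One representative locality transformation of the kind such studies evaluate is loop blocking (tiling). The sketch below shows the transformation on a matrix transpose in Python for clarity; the cache benefit itself only materializes in compiled code on machines like those listed:

```python
def transpose_blocked(a, n, block=32):
    """Transpose an n x n matrix stored as a flat row-major list, working
    in block x block tiles: each tile of the source and destination is
    small enough to stay cache-resident while it is being traversed."""
    out = [0.0] * (n * n)
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            for i in range(ii, min(ii + block, n)):
                for j in range(jj, min(jj + block, n)):
                    out[j * n + i] = a[i * n + j]
    return out
```

The block size would be tuned per machine, which is exactly where the paper finds portability breaks down.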
Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing
NASA Technical Reports Server (NTRS)
Fricker, David M.
1997-01-01
The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. The tests also explored the effects of scaling the simulation with the number of processors, in addition to distributing a constant-size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with Ethernet and ATM networking, plus a shared-memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency: a shared-memory machine is fastest, and ATM networking provides acceptable performance. The limitations of Ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
Equation solvers for distributed-memory computers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.
Deep learning for medical image segmentation - using the IBM TrueNorth neurosynaptic system
NASA Astrophysics Data System (ADS)
Moran, Steven; Gaonkar, Bilwaj; Whitehead, William; Wolk, Aidan; Macyszyn, Luke; Iyer, Subramanian S.
2018-03-01
Deep convolutional neural networks have found success in semantic image segmentation tasks in computer vision and medical imaging. These algorithms are typically executed on conventional von Neumann processor architectures or GPUs, which is suboptimal. Neuromorphic processors that replicate the structure of the brain are better suited to train and execute deep learning models for image segmentation by relying on massively parallel processing. However, given that they closely emulate the human brain, on-chip hardware and digital memory limitations also constrain them. Adapting deep learning models to execute image segmentation tasks on such chips requires specialized training and validation. In this work, we demonstrate, for the first time, spinal image segmentation performed using a deep learning network implemented on the neuromorphic hardware of the IBM TrueNorth Neurosynaptic System, and validate the performance of our network by comparing it to human-generated segmentations of spinal vertebrae and disks. To achieve this on neuromorphic hardware, the training model constrains the coefficients of individual neurons to {-1, 0, 1} using the Energy Efficient Deep Neuromorphic (EEDN) network training algorithm. With 1 million neurons and 256 million synapses, the scale of the neural network implemented by the IBM TrueNorth allows us to execute the requisite mapping between segmented images and non-uniform intensity MR images >20 times faster than on a GPU-accelerated network and using <0.1 W. This speed and efficiency imply that a trained neuromorphic chip can be deployed in intra-operative environments where real-time medical image segmentation is necessary.
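The {-1, 0, 1} constraint can be illustrated with a simple projection step. This is a generic sketch of weight ternarization, not the EEDN training algorithm itself (which enforces the constraint during training), and the threshold value is arbitrary:

```python
def ternarize(weights, threshold=0.05):
    """Project real-valued coefficients onto {-1, 0, 1}: weights near
    zero snap to 0, the rest keep only their sign.  (Illustrative only;
    the threshold here is an assumption, not a TrueNorth parameter.)"""
    return [0 if abs(w) <= threshold else (1 if w > 0 else -1)
            for w in weights]
```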
Specification and preliminary design of an array processor
NASA Technical Reports Server (NTRS)
Slotnick, D. L.; Graham, M. L.
1975-01-01
The design of a computer suited to the class of problems typified by the general circulation of the atmosphere was investigated. A fundamental goal was that the resulting machine should have roughly 100 times the computing capability of an IBM 360/95 computer. A second requirement was that the machine should be programmable in a higher level language similar to FORTRAN. Moreover, the new machine would have to be compatible with the IBM 360/95 since the IBM machine would continue to be used for pre- and post-processing. A third constraint was that the cost of the new machine was to be significantly less than that of other extant machines of similar computing capability, such as the ILLIAC IV and CDC STAR. A final constraint was that it should be feasible to fabricate a complete system and put it in operation by early 1978. Although these objectives were generally met, considerable work remains to be done on the routing system.
Performance assessment of KORAT-3D on the ANL IBM-SP computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.
1999-09-01
The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).
Superconducting Qubit with Integrated Single Flux Quantum Controller Part I: Theory and Fabrication
NASA Astrophysics Data System (ADS)
Beck, Matthew; Leonard, Edward, Jr.; Thorbeck, Ted; Zhu, Shaojiang; Howington, Caleb; Nelson, Jj; Plourde, Britton; McDermott, Robert
As the size of quantum processors grows, so do the classical control requirements. The single flux quantum (SFQ) Josephson digital logic family offers an attractive route to proximal classical control of multi-qubit processors. Here we describe coherent control of qubits via trains of SFQ pulses. We discuss the fabrication of an SFQ-based pulse generator and a superconducting transmon qubit on a single chip. Sources of excess microwave loss stemming from the complex multilayer fabrication of the SFQ circuit are discussed. We show how to mitigate this loss through judicious choice of process workflow and appropriate use of sacrificial protection layers. Present address: IBM T.J. Watson Research Center.
Development of 3-Year Roadmap to Transform the Discipline of Systems Engineering
2010-03-31
quickly humans could physically construct them. Indeed, magnetic core memory was entirely constructed by human hands until it was superseded by...For their mainframe computers, IBM develops the applications, operating system, computer hardware and microprocessors (off the shelf standard memory ...processor developers work on potential computational and memory pipelines to support the required performance capabilities and use the available transistors
Dynamic Load Balancing for Grid Partitioning on a SP-2 Multiprocessor: A Framework
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full-scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance among processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated the decision maker, Jove, while the others are assigned to the computational fluid dynamics. Processors running CFD send flags to Jove at a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full-scale grid partitioning on the target machine IBM SP2.
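Jove's accept/reject decision can be sketched with a simple cost model. The function below is hypothetical (the abstract does not give Jove's cost formula), but it captures the trade-off between the one-time repartitioning cost and the per-iteration time saved:

```python
def jove_decide(current_loads, proposed_loads, repartition_cost, horizon):
    """Decision step of a Jove-style balancer (hypothetical cost model):
    a parallel step runs at the speed of the slowest processor, so accept
    the proposed partition only if the per-step time saved over the
    remaining iterations outweighs the one-time data-movement cost."""
    t_now = max(current_loads)
    t_new = max(proposed_loads)
    saving = (t_now - t_new) * horizon
    return saving > repartition_cost
```

For example, rebalancing loads of [10, 2] to [6, 6] is worth a data-movement cost of 10 whenever more than a couple of iterations remain.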
Morosetti, Roberta; Mirabella, Massimiliano; Gliubizzi, Carla; Broccolini, Aldobrando; De Angelis, Luciana; Tagliafico, Enrico; Sampaolesi, Maurilio; Gidaro, Teresa; Papacci, Manuela; Roncaglia, Enrica; Rutella, Sergio; Ferrari, Stefano; Tonali, Pietro Attilio; Ricci, Enzo; Cossu, Giulio
2006-11-07
Inflammatory myopathies (IM) are acquired diseases of skeletal muscle comprising dermatomyositis (DM), polymyositis (PM), and inclusion-body myositis (IBM). Immunosuppressive therapies, usually beneficial for DM and PM, are poorly effective in IBM. We report the isolation and characterization of mesoangioblasts, vessel-associated stem cells, from diagnostic muscle biopsies of IM. The number of cells isolated, proliferation rate and lifespan, marker expression, and ability to differentiate into smooth muscle do not differ among normal and IM mesoangioblasts. At variance with normal, DM, and PM mesoangioblasts, cells isolated from IBM fail to differentiate into skeletal myotubes. These data correlate with a lack of alkaline phosphatase (ALP)-positive cells in the connective tissue of IBM muscle; such cells are conversely dramatically increased in PM and DM. The myogenic inhibitory basic helix-loop-helix factor B3 is highly expressed in IBM mesoangioblasts. Indeed, silencing this gene or overexpressing MyoD rescues the myogenic defect of IBM mesoangioblasts, opening novel cell-based therapeutic strategies for this crippling disorder.
ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. 
The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.
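The core Monte Carlo idea ACARA applies (alternating random failure and repair intervals, then averaging uptime) can be sketched as follows. This is a single-component toy with exponential distributions only, not ACARA's multi-component, Weibull-capable model, and all parameter values are illustrative:

```python
import random

def simulate_availability(mtbf, mttr, mission_hours, trials, seed=1):
    """Estimate availability of one repairable component by Monte Carlo:
    alternate exponentially distributed times-to-failure (mean mtbf) and
    repair outages (mean mttr), and average uptime / mission time."""
    rng = random.Random(seed)
    total_up = 0.0
    for _ in range(trials):
        t, up = 0.0, 0.0
        while t < mission_hours:
            ttf = rng.expovariate(1.0 / mtbf)   # time to next failure
            up += min(ttf, mission_hours - t)
            t += ttf
            if t >= mission_hours:
                break
            t += rng.expovariate(1.0 / mttr)    # repair outage
        total_up += up / mission_hours
    return total_up / trials
```

With mtbf = 1000 h and mttr = 10 h the estimate lands near the steady-state value mtbf / (mtbf + mttr), about 0.99.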
Landsat image registration for agricultural applications
NASA Technical Reports Server (NTRS)
Wolfe, R. H., Jr.; Juday, R. D.; Wacker, A. G.; Kaneko, T.
1982-01-01
An image registration system has been developed at the NASA Johnson Space Center (JSC) to spatially align multi-temporal Landsat acquisitions for use in agriculture and forestry research. Working in conjunction with the Master Data Processor (MDP) at the Goddard Space Flight Center, it functionally replaces the long-standing LACIE Registration Processor as JSC's data supplier. The system represents an expansion of the techniques developed for the MDP and LACIE Registration Processor, and it utilizes the experience gained in an IBM/JSC effort evaluating the performance of the latter. These techniques are discussed in detail. Several tests were developed to evaluate the registration performance of the system. The results indicate that 1/15-pixel accuracy (about 4m for Landsat MSS) is achievable in ideal circumstances, sub-pixel accuracy (often to 0.2 pixel or better) was attained on a representative set of U.S. acquisitions, and a success rate commensurate with the LACIE Registration Processor was realized. The system has been employed in a production mode on U.S. and foreign data, and a performance similar to the earlier tests has been noted.
European Scientific Notes. Volume 35, Number 12,
1981-12-31
…been redesigned to work with the Intel 8085 microprocessor, it has the… …A. Osorio, which was organized some 3 years ago and contains about half of the operational set. …attempt to derive a set of invariants upon which virtually speaker-invariant… MOISE is based on the Intel 8085A microprocessor, and… …FACILITY software interface; It has been IBM International's… a Research Signal Processor (RSP) using reduced computational complexity algorithms for…
NASA Astrophysics Data System (ADS)
Ethier, Stephane; Lin, Zhihong
2001-10-01
Earlier this year, the National Energy Research Scientific Computing Center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors running at a peak performance of 1.5 GFlops each, this IBM SP machine has a theoretical peak of almost 3.8 TFlops. To efficiently harness such computing power in one single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared-memory nodes of the NERSC IBM SP. Performance results are shown, as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)
[Hardware for graphics systems].
Goetz, C
1991-02-01
In all personal computer applications, be it for private or professional use, the decision of which "brand" of computer to buy is of central importance. In the USA Apple computers are mainly used in universities, while in Europe computers of the so-called "industry standard" by IBM (or clones thereof) have been increasingly used for many years. Independently of any brand name considerations, the computer components purchased must meet the current (and projected) needs of the user. Graphic capabilities and standards, processor speed, the use of co-processors, as well as input and output devices such as "mouse", printers and scanners are discussed. This overview is meant to serve as a decision aid. Potential users are given a short but detailed summary of current technical features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barry, R.F.
LEOPARD is a unit cell homogenization and spectrum generation (MUFT-SOFOCATE type) program with a fuel depletion option. Hardware: IBM 360; UNIVAC 1108. Language: FORTRAN IV(H) (IBM 360) and FORTRAN V (UNIVAC 1108). Operating system: OS/360 (IBM 360) and EXEC2 (UNIVAC 1108). Memory: 50K (decimal).
Benchmarking and tuning the MILC code on clusters and supercomputers
NASA Astrophysics Data System (ADS)
Gottlieb, Steven
2002-03-01
Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha.
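For reference, the KS conjugate-gradient solver tuned here is, at its core, the standard CG iteration. The sketch below is generic textbook CG against a matvec callback, not MILC's optimized implementation; in MILC the matvec is the staggered (Kogut-Susskind) fermion operator, which is exactly where memory-system behavior dominates:

```python
def conjugate_gradient(matvec, b, tol=1e-12, max_iter=200):
    """Textbook conjugate gradient for A x = b, with A symmetric positive
    definite, expressed against a matvec callback.  The matvec is the
    bandwidth-bound kernel that tuning targets on PIV, SP, and Alpha."""
    x = [0.0] * len(b)
    r = b[:]                      # residual (x0 = 0)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```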
1989-12-01
…Interrupt Procedures… Support for a Larger Memory Model… Implementation… describe the programmer's model of the hardware utilized in the microcomputers and interrupt-driven serial communication considerations. Chapter III… Central Processor Unit: The programming model of Table 2.1 is common to the Intel 8088, 8086 and 80x86 series of microprocessors used in the IBM PC/AT
Perfmon2: a leap forward in performance monitoring
NASA Astrophysics Data System (ADS)
Jarp, S.; Jurga, R.; Nowak, A.
2008-07-01
This paper describes the software component, perfmon2, that is about to be added to the Linux kernel as the standard interface to the Performance Monitoring Unit (PMU) on common processors, including x86 (AMD and Intel), Sun SPARC, MIPS, IBM Power and Intel Itanium. It also describes a set of tools for doing performance monitoring in practice and details how the CERN openlab team has participated in the testing and development of these tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, G.A.; Commer, M.
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover, when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large-scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1,024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.; Cho, K.W.
1991-12-01
VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
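The in-core flux iteration described for Version 2 can be illustrated with a toy one-group, 1-D diffusion solve. This is a stand-in sketch, far simpler than VENTURE's multigroup, multidimensional modules, and the cross-section values in the test are made up:

```python
def flux_iteration(n, d, sigma_a, source, h, sweeps=2000):
    """Jacobi flux iteration for a 1-D, one-group, fixed-source diffusion
    problem with zero-flux boundaries, discretized as
      -d*(phi[i-1] - 2*phi[i] + phi[i+1])/h**2 + sigma_a*phi[i] = source.
    The whole flux vector stays in memory between sweeps, which is the
    "in-core" mode (as opposed to writing each sweep to disk)."""
    phi = [0.0] * n
    diag = 2.0 * d / h**2 + sigma_a
    for _ in range(sweeps):
        new = [0.0] * n
        for i in range(n):
            left = phi[i - 1] if i > 0 else 0.0
            right = phi[i + 1] if i < n - 1 else 0.0
            new[i] = (source + d * (left + right) / h**2) / diag
        phi = new
    return phi
```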
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopwood, J.E.; Affeldt, B.
An IBM personal computer (PC), a Gerber coordinate digitizer, and a collection of other instruments make up a system known as the Coordinate Digitizer Interactive Processor (CDIP). The PC extracts coordinate data from the digitizer through a special interface, and then, after reformatting, transmits the data to a remote VAX computer, a floppy disk, and a display terminal. This system has improved the efficiency of producing printed circuit-board artwork and extended the useful life of the Gerber GCD-1 Digitizer. 1 ref., 12 figs.
Challenges and Opportunities in Propulsion Simulations
2015-09-24
leverage Nvidia GPU accelerators • Release common computational infrastructure as Distro A for collaboration • Add physics modules as either… Titan vs. Summit: Interconnect: Gemini (6.4 GB/s) vs. Dual-Rail EDR-IB (23 GB/s); Interconnect Topology: 3D Torus vs. Non-blocking Fat Tree; Processors: AMD Opteron™ and NVIDIA Kepler™ vs. IBM POWER9 and NVIDIA Volta™; File System: 32 PB at 1 TB/s (Lustre®) vs. 120 PB at 1 TB/s (GPFS™); Peak power consumption: 9 MW vs. 10 MW. Source: R
Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture
NASA Astrophysics Data System (ADS)
Glosli, James
2013-03-01
With the stall in clock-cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts per processor. The multi-core evolution has been expressed in both symmetric multiprocessor (SMP) architectures and CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a molecular dynamics code called ddcMD, developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid, recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lenses of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Increased aging in primary muscle cultures of sporadic inclusion-body myositis.
Morosetti, Roberta; Broccolini, Aldobrando; Sancricca, Cristina; Gliubizzi, Carla; Gidaro, Teresa; Tonali, Pietro A; Ricci, Enzo; Mirabella, Massimiliano
2010-07-01
Aging is thought to participate in the pathogenesis of sporadic inclusion-body myositis (s-IBM). Although the regenerative potential of s-IBM muscle is reduced in vivo, age-related abnormalities of satellite cells possibly accounting for the decline of muscle repair have not been demonstrated. Here we show that the proliferation rate and clonogenicity of s-IBM myoblasts are significantly lower, and doubling time is longer, than in normal age-matched controls, indicating that the proliferative capacity of s-IBM muscles becomes exhausted earlier. Telomere shortening is detected in s-IBM cells, suggesting premature senescence. Differently from controls, s-IBM myoblasts show increased active beta-catenin mainly localized within myonuclei, indicating active Wnt stimulation. After many rounds of muscle growth, only s-IBM myoblasts accumulate congophilic inclusions and immunoreactive Abeta(1-40) deposits. Therefore, s-IBM myoblasts seem to have a constitutively impaired regenerative capacity and the intrinsic property, upon sufficient aging in vitro, to accumulate Abeta. Our results might be valuable in understanding molecular mechanisms associated with muscle aging underlying the defective regeneration of s-IBM muscle and provide new clues for future therapeutic strategies. Copyright 2008 Elsevier Inc. All rights reserved.
1990-07-31
examples on their use is available with the PASS User Documentation Manual. The data structure of PASS requires a three-level organizational… files, and missing control variables. A specific problem noted involved the absence of an 8087 mathematical co-processor on the target IBM-XT machine… System, required an operational understanding of the advanced mathematical technique used in the model. Problems with the original release of the PASS
Structural Dynamics of Maneuvering Aircraft.
1987-09-01
MANDYN. Written in Fortran 77, it was compiled and executed with Microsoft Fortran, Vers. 4.0 on an IBM PC-AT, with a co-processor, and a 20M hard disk… to the pivot area. Presumably, the pivot area is a hard point in the wing structure. Results: The final mass and flexural rigidity… lowest mode) is an important parameter. If it is less than three, the load factor approach can be problematical. In assessing the effect of one maneuver
Multibillion-atom Molecular Dynamics Simulations of Plasticity, Spall, and Ejecta
NASA Astrophysics Data System (ADS)
Germann, Timothy C.
2007-06-01
Modern supercomputing platforms, such as the IBM BlueGene/L at Lawrence Livermore National Laboratory and the Roadrunner hybrid supercomputer being built at Los Alamos National Laboratory, are enabling large-scale classical molecular dynamics simulations of phenomena that were unthinkable just a few years ago. Using either the embedded atom method (EAM) description of simple (close-packed) metals, or modified EAM (MEAM) models of more complex solids and alloys with mixed covalent and metallic character, simulations containing billions to trillions of atoms are now practical, reaching volumes in excess of a cubic micron. In order to obtain any new physical insights, however, it is equally important that the analysis of such systems be tractable. This is in fact possible, in large part due to our highly efficient parallel visualization code, which enables the rendering of atomic spheres, Eulerian cells, and other geometric objects in a matter of minutes, even for tens of thousands of processors and billions of atoms. After briefly describing the BlueGene/L and Roadrunner architectures, and the code optimization strategies that were employed, results obtained thus far on BlueGene/L will be reviewed, including: (1) shock compression and release of a defective EAM Cu sample, illustrating the plastic deformation accompanying void collapse as well as the subsequent void growth and linkup upon release; (2) solid-solid martensitic phase transition in shock-compressed MEAM Ga; and (3) Rayleigh-Taylor fluid instability modeled using large-scale direct simulation Monte Carlo (DSMC) simulations. I will also describe our initial experiences utilizing Cell Broadband Engine processors (developed for the Sony PlayStation 3), and planned simulation studies of ejecta and spall failure in polycrystalline metals that will be carried out when the full Petaflop Opteron/Cell Roadrunner supercomputer is assembled in mid-2008.
Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F
2002-02-01
This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
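The partitioning strategy described above — dividing photon histories equally among processors, with an independent random-number stream per rank — can be sketched as follows. `partition_photons` is a hypothetical helper, not part of SIMIND; in the real code each rank would draw its histories from its own SPRNG stream.

```python
def partition_photons(n_photons, n_procs):
    """Split photon histories as evenly as possible across MPI ranks."""
    base, extra = divmod(n_photons, n_procs)
    # The first `extra` ranks simulate one additional history each.
    return [base + (rank < extra) for rank in range(n_procs)]

counts = partition_photons(10_000_000, 32)
```

Because the histories are independent, near-linear speedup follows directly from this balance, which matches the scaling observed up to 32 processors.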
NASA Astrophysics Data System (ADS)
Newman, Gregory A.; Commer, Michael
2009-07-01
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
Multiphase complete exchange on Paragon, SP2 and CS-2
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1995-01-01
The overhead of interprocessor communication is a major factor in limiting the performance of parallel computer systems. The complete exchange is the severest communication pattern in that it requires each processor to send a distinct message to every other processor. This pattern is at the heart of many important parallel applications. On hypercubes, multiphase complete exchange has been developed and shown to provide optimal performance over varying message sizes. Most commercial multicomputer systems do not have a hypercube interconnect. However, they use special purpose hardware and dedicated communication processors to achieve very high performance communication and can be made to emulate the hypercube quite well. Multiphase complete exchange has been implemented on three contemporary parallel architectures: the Intel Paragon, IBM SP2 and Meiko CS-2. The essential features of these machines are described and their basic interprocessor communication overheads are discussed. The performance of multiphase complete exchange is evaluated on each machine. It is shown that the theoretical ideas developed for hypercubes are also applicable in practice to these machines and that multiphase complete exchange can lead to major savings in execution time over traditional solutions.
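The basic hypercube schedule underlying the multiphase algorithm can be simulated directly: in phase i, each node trades with its dimension-i neighbour every buffered message whose destination differs from the node's id in bit i; after d phases every message has reached its destination. This sketch shows only the single-dimension-per-phase schedule; the multiphase variant combines dimensions per phase depending on message size.

```python
def hypercube_complete_exchange(d):
    """Simulate the d-phase complete exchange on a 2^d-node hypercube."""
    n = 1 << d
    # buf[p]: messages currently held at node p, keyed by (source, destination)
    buf = [{(p, dst): f"payload {p}->{dst}" for dst in range(n)} for p in range(n)]
    for i in range(d):
        bit = 1 << i
        new = [dict() for _ in range(n)]
        for p in range(n):
            for (src, dst), payload in buf[p].items():
                # Forward along dimension i if dst disagrees with p in bit i.
                target = p ^ bit if (dst ^ p) & bit else p
                new[target][(src, dst)] = payload
        buf = new
    return buf

delivered = hypercube_complete_exchange(3)
```

By induction, after phase i every message sits at a node agreeing with its destination in bits 0..i, so d phases suffice, and every node sends exactly n/2 messages per phase — the load balance that makes the pattern attractive to emulate on non-hypercube interconnects.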
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.; Cho, K.W.
1991-12-01
VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
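The in-core flux iteration mentioned above can be illustrated with a far simpler problem than VENTURE solves: a one-group, one-dimensional finite-difference diffusion equation −Dφ″ + Σₐφ = S with zero-flux boundaries, iterated by Jacobi sweeps held entirely in memory. This is a minimal sketch under assumed coefficients, not VENTURE's multigroup, multidimensional scheme.

```python
def solve_flux(n=50, h=0.1, D=1.0, sigma_a=0.5, S=1.0, tol=1e-10):
    """Jacobi flux iteration for -D*phi'' + sigma_a*phi = S, phi=0 at both ends."""
    phi = [0.0] * n
    diag = 2.0 * D / h ** 2 + sigma_a      # diagonal of the FD operator
    off = D / h ** 2                        # coupling to each neighbour
    while True:
        new = [0.0] * n
        for i in range(n):
            left = phi[i - 1] if i > 0 else 0.0      # zero-flux boundary
            right = phi[i + 1] if i < n - 1 else 0.0
            new[i] = (S + off * (left + right)) / diag
        if max(abs(a - b) for a, b in zip(new, phi)) < tol:
            return new
        phi = new

flux = solve_flux()
```

Keeping `phi` and `new` in memory between sweeps is exactly the "in-core" optimization: no intermediate flux vector ever touches disk.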
Strong scaling and speedup to 16,384 processors in cardiac electro-mechanical simulations.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
High performance computing is required to make feasible simulations of whole-organ models of the heart with biophysically detailed cellular models in a clinical setting. Increasing model detail by simulating electrophysiology and mechanical models increases computational demands. We present scaling results of an electro-mechanical cardiac model of two ventricles and compare them to our previously published results using an electrophysiological model only. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Fiber orientation was included. Data decomposition for the distribution onto the distributed memory system was carried out by orthogonal recursive bisection. Load weight ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100. The ten Tusscher et al. (2004) electrophysiological cell model was used, together with the Rice et al. (1999) model for computation of the calcium-transient-dependent force. Scaling results for 512, 1024, 2048, 4096, 8192 and 16,384 processors were obtained for 1 ms simulation time. The simulations were carried out on an IBM Blue Gene/L supercomputer. The results show linear scaling from 512 to 16,384 processors with speedup factors between 1.82 and 2.14 between partitions. The optimal load ratio was 1:25 on all partitions. However, a shift towards load ratios with higher weight for the tissue elements can be recognized, as is to be expected when adding computational complexity to the model while keeping the same communication setup. This work demonstrates that it is potentially possible to run simulations of 0.5 s using the presented electro-mechanical cardiac model within 1.5 hours.
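The weighted decomposition idea — recursively bisecting the domain so that each half carries roughly equal load, with tissue elements weighted more heavily than non-tissue — can be sketched in one dimension. This is an illustrative toy, not the 3-D orthogonal recursive bisection used in the paper; the 1:25 weights echo the ratio found optimal above.

```python
def orb_split(weights, parts):
    """Recursively bisect a 1-D list of load weights into `parts`
    (a power of two) contiguous chunks of near-equal total weight."""
    if parts == 1:
        return [weights]
    total = sum(weights)
    # Choose the cut that best balances the two halves.
    acc = 0.0
    best = (abs(total), 0)
    for i, w in enumerate(weights):
        acc += w
        best = min(best, (abs(acc - total / 2), i + 1))
    cut = best[1]
    return orb_split(weights[:cut], parts // 2) + orb_split(weights[cut:], parts // 2)

# Tissue elements weighted 25, non-tissue 1 (the paper's 1:25 ratio).
weights = [1.0] * 40 + [25.0] * 8 + [1.0] * 40
chunks = orb_split(weights, 4)
```

In 3-D the same idea applies along the longest axis at each level, which is how the decomposition keeps the heavily weighted tissue region spread across partitions.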
An Evaluation of Architectural Platforms for Parallel Navier-Stokes Computations
NASA Technical Reports Server (NTRS)
Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.
1996-01-01
We study the computational, communication, and scalability characteristics of a computational fluid dynamics application, which solves the time accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architecture platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and distributed memory multiprocessors with different topologies - the IBM SP and the Cray T3D. We investigate the impact of various networks connecting the cluster of workstations on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.
Parallelizing Navier-Stokes Computations on a Variety of Architectural Platforms
NASA Technical Reports Server (NTRS)
Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.
1997-01-01
We study the computational, communication, and scalability characteristics of a Computational Fluid Dynamics application, which solves the time accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architectural platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), distributed memory multiprocessors with different topologies-the IBM SP and the Cray T3D. We investigate the impact of various networks, connecting the cluster of workstations, on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.
Morosetti, Roberta; Gliubizzi, Carla; Sancricca, Cristina; Broccolini, Aldobrando; Gidaro, Teresa; Lucchini, Matteo; Mirabella, Massimiliano
2012-04-01
Tumor necrosis factor-like weak inducer of apoptosis (TWEAK) and its receptor Fn14 exert pleiotropic effects, including regulation of myogenesis. Sporadic inclusion-body myositis (IBM) is the most common muscle disease of the elderly population and leads to severe disability. IBM mesoangioblasts, different from mesoangioblasts in other inflammatory myopathies, display a myogenic differentiation defect. The objective of the present study was to investigate TWEAK-Fn14 expression in IBM and other inflammatory myopathies and explore whether TWEAK modulation affects myogenesis in IBM mesoangioblasts. TWEAK, Fn14, and NF-κB expression was assessed by immunohistochemistry and Western blot in cell samples from both muscle biopsies and primary cultures. Mesoangioblasts isolated from samples of IBM, dermatomyositis, polymyositis, and control muscles were treated with recombinant human TWEAK, Fn14-Fc chimera, and anti-TWEAK antibody. TWEAK-RNA interference was performed in IBM and dermatomyositis mesoangioblasts. TWEAK levels in culture media were determined by enzyme-linked immunosorbent assay. In IBM muscle, we found increased TWEAK-Fn14 expression. Increased levels of TWEAK were found in differentiation medium from IBM mesoangioblasts. Moreover, TWEAK inhibited myogenic differentiation of mesoangioblasts. Consistent with this evidence, TWEAK inhibition by Fn14-Fc chimera or short interfering RNA induced myogenic differentiation of IBM mesoangioblasts. We provide evidence that TWEAK is a negative regulator of human mesoangioblast differentiation. Dysregulation of the TWEAK-Fn14 axis in IBM muscle may induce progressive muscle atrophy and reduce activation and differentiation of muscle precursor cells. Copyright © 2012 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.
1992-11-01
November 1992. 1992 International Aerospace and Ground Conference on Lightning and Static Electricity - Addendum, October 6-8, 1992. Sponsoring agency: Federal Aviation Administration Technical Center, ACD-230. The program runs well on an IBM PC or compatible 386 with a 387 math co-processor and a VGA monitor. For this study, streamers were added.
Analyzing checkpointing trends for applications on the IBM Blue Gene/P system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naik, H.; Gupta, R.; Beckman, P.
Current petascale systems have tens of thousands of hardware components and complex system software stacks, which increase the probability of faults occurring during the lifetime of a process. Checkpointing has been a popular method of providing fault tolerance in high-end systems. While considerable research has been done to optimize checkpointing, in practice the method still involves a high overhead cost for users. In this paper, we study the checkpointing overhead seen by applications running on leadership-class machines such as the IBM Blue Gene/P at Argonne National Laboratory. We study various applications and design a methodology to assist users in understanding and choosing checkpointing frequency and reducing the overhead incurred. In particular, we study three popular applications -- the Grid-Based Projector-Augmented Wave application, the Carr-Parrinello Molecular Dynamics application, and a Nek5000 computational fluid dynamics application -- and analyze their memory usage and possible checkpointing trends on 32,768 processors of the Blue Gene/P system.
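A common first-order rule for choosing checkpoint frequency — Young's approximation, which balances checkpoint cost against expected rework after a failure — is sketched below. This is a textbook formula offered for context, not necessarily the methodology developed in the paper, and the numbers are hypothetical.

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order optimal checkpoint interval: sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# e.g., a 5-minute checkpoint on a system with a 24-hour MTBF
tau = young_interval(300.0, 24 * 3600.0)   # seconds between checkpoints
```

The square-root dependence is why overhead studies like this one matter: halving the checkpoint cost C lets the application checkpoint more often without paying more total overhead.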
Martin, G. T.; Yoon, S. S.; Mott, K. E.
1991-01-01
Schistosomiasis, a group of parasitic diseases caused by Schistosoma parasites, is associated with water resources development and affects more than 200 million people in 76 countries. Depending on the species of parasite involved, disease of the liver, spleen, gastrointestinal or urinary tract, or kidneys may result. A computer-assisted teaching package has been developed by WHO for use in the training of public health workers involved in schistosomiasis control. The package consists of the software, ZOOM, and a schistosomiasis information file, Dr Schisto, and uses hypermedia technology to link pictures and text. ZOOM runs on the IBM-PC and IBM-compatible computers, is user-friendly, requires a minimal hardware configuration, and can interact with the user in English, French, Spanish or Portuguese. The information files for ZOOM can be created or modified by the instructor using a word processor, and thus can be designed to suit the need of students. No programming knowledge is required to create the stacks. PMID:1786618
Efficient parallel implicit methods for rotary-wing aerodynamics calculations
NASA Astrophysics Data System (ADS)
Wissink, Andrew M.
Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by a lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize its potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower Upper-Symmetric Gauss Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier Stokes (TURNS) code. The new hybrid LU-SGS scheme couples a point-relaxation approach of the Data Parallel-Lower Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multi-processors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It experiences a higher degree of robustness than DP-LUR for third-order upwind solutions. The second part of this work examines use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR).
The Newton method demonstrates good parallel performance on the IBM SP2, with OSGCR giving slightly better performance than GMRES on large numbers of processors. For steady and quasi-steady calculations, the convergence rate is accelerated but the overall solution time remains about the same as with the standard hybrid LU-SGS scheme. For unsteady calculations, however, the Newton method maintains a higher degree of time-accuracy, which allows the use of larger timesteps and results in CPU savings of 20-35%.
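The outer Newton iteration at the heart of the Newton-Krylov approach can be illustrated on a toy two-equation system. This minimal sketch uses a forward-difference Jacobian and a direct 2x2 solve in place of the preconditioned GMRES/OSGCR inner iteration; it is not the TURNS solver, just the structure of the outer loop.

```python
def newton2(f, x, tol=1e-12, h=1e-7):
    """Newton's method for a 2-equation nonlinear system.
    A 2x2 Cramer solve stands in for the Krylov inner iteration."""
    for _ in range(50):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            return x
        # Forward-difference Jacobian, column by column.
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            xp = list(x)
            xp[j] += h
            fp = f(xp)
            for i in range(2):
                J[i][j] = (fp[i] - fx[i]) / h
        # Solve J * dx = -f(x) directly (Cramer's rule).
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-fx[0] * J[1][1] + fx[1] * J[0][1]) / det
        dx1 = (-fx[1] * J[0][0] + fx[0] * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

# Toy system: x^2 + y^2 = 4, x*y = 1.
f = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
root = newton2(f, [2.0, 0.3])
```

In the CFD setting the Jacobian is never formed explicitly; Krylov methods need only Jacobian-vector products, which is what makes the approach tractable at scale.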
TOUGH2_MP: A parallel version of TOUGH2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu; Ding, Chris
2003-04-09
TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface is adopted for communication among processors. Numerical performance of the current version of the code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we review the development of TOUGH2_MP and discuss its basic features, modules, and their applications.
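The essential mechanics of a partitioned grid solver — each processor updates its own cells, exchanging one layer of boundary ("ghost") values with neighbours every iteration — can be shown with a toy 1-D Jacobi sweep. This sketch simulates the ranks in plain Python (the role MPI plays in TOUGH2_MP) and checks that the partitioned result matches a single-domain sweep exactly.

```python
def jacobi_sweep(u):
    """One Jacobi averaging sweep with the two end values held fixed."""
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2.0 for i in range(1, len(u) - 1)] + [u[-1]]

def partitioned_sweeps(u, n_parts, n_sweeps):
    """The same sweeps on n_parts contiguous chunks, exchanging one
    ghost value per side each iteration."""
    size = len(u) // n_parts
    chunks = [u[r * size:(r + 1) * size] for r in range(n_parts)]
    for _ in range(n_sweeps):
        # Ghost exchange: copy each neighbour's edge value.
        edges = [(chunks[r - 1][-1] if r > 0 else None,
                  chunks[r + 1][0] if r < n_parts - 1 else None)
                 for r in range(n_parts)]
        new_chunks = []
        for r, (left, right) in enumerate(edges):
            ext = ([left] if left is not None else []) \
                  + chunks[r] + ([right] if right is not None else [])
            swept = jacobi_sweep(ext)
            lo = 1 if left is not None else 0
            new_chunks.append(swept[lo:lo + len(chunks[r])])
        chunks = new_chunks
    return [x for c in chunks for x in c]

u0 = [0.0] * 8 + [100.0] * 8
serial = list(u0)
for _ in range(10):
    serial = jacobi_sweep(serial)
parallel = partitioned_sweeps(u0, 4, 10)
```

In the real code, METIS chooses the partition to minimize the ghost-cell surface, which is what keeps this per-iteration communication small relative to the per-cell computation.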
Hyperspectral anomaly detection using Sony PlayStation 3
NASA Astrophysics Data System (ADS)
Rosario, Dalton; Romano, João; Sepulveda, Rene
2009-05-01
We present a proof-of-principle demonstration using Sony's IBM Cell processor-based PlayStation 3 (PS3) to run, in near real-time, a hyperspectral anomaly detection algorithm (HADA) on real hyperspectral (HS) long-wave infrared imagery. The PS3 console proved to be ideal for doing precisely the kind of heavy computational lifting HS-based algorithms require, and the fact that it is a relatively open platform makes programming scientific applications feasible. The PS3 HADA is a unique parallel random-sampling-based anomaly detection approach that does not require prior spectra of the clutter background. The PS3 HADA is designed to handle known underlying difficulties (e.g., target shape/scale uncertainties) often ignored in the development of autonomous anomaly detection algorithms. The effort is part of an ongoing cooperative contribution between the Army Research Laboratory and the Army's Armament Research, Development and Engineering Center, which aims at demonstrating performance of innovative algorithmic approaches for applications requiring autonomous anomaly detection using passive sensors.
NASA Technical Reports Server (NTRS)
Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HPF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993, and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed in a way that allows the programmer to code an application at a high level, and then selectively optimize portions of the code by dropping into message-passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI (message passing interface)) combinations will be compared, based on the latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, thus providing a cross-machine and cross-model comparison. Specifically, HPF-based NPB results will be compared with MPI-based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors.
In addition, we also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000.
ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The numbers of block failures are tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function.
ETARA can model the RAM characteristics of systems represented by multilayered, nesting block diagrams. There are no restrictions on the number of total blocks or on the number of blocks in a series, parallel, or M-of-N parallel subsystem. In addition, the same block can appear in more than one subsystem if such an arrangement is necessary for an accurate model. ETARA 3.3 is written in APL2 for IBM PC series computers or compatibles running MS-DOS and the APL2 interpreter. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. The standard distribution medium for this package is a set of two 5.25 inch 360K MS-DOS format diskettes. A sample executable is included. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ETARA was developed in 1990 and last updated in 1991.
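The Monte Carlo mechanism ETARA uses — drawing failure and repair times from exponential (or Weibull) distributions and accumulating up-time — can be sketched for a single block. This is a minimal illustration of the method, not ETARA's APL2 implementation; the MTBF/MTTR numbers are hypothetical.

```python
import random

def availability(mtbf, mttr, horizon, seed=1):
    """Monte Carlo availability of one block with exponential
    failure and repair time distributions."""
    rng = random.Random(seed)
    t = up = 0.0
    while t < horizon:
        ttf = rng.expovariate(1.0 / mtbf)   # time to next failure
        up += min(ttf, horizon - t)          # credit up-time within horizon
        t += ttf
        if t >= horizon:
            break
        t += rng.expovariate(1.0 / mttr)    # repair (down) time
    return up / horizon

a = availability(mtbf=100.0, mttr=5.0, horizon=1e6)
```

The estimate converges toward the steady-state value MTBF/(MTBF+MTTR) ≈ 0.952 for these inputs; ETARA applies the same sampling idea block by block through the reliability block diagram.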
Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors
NASA Astrophysics Data System (ADS)
Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.
The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed the 13 functions that accounted for the highest percentage of processing were all FFT processing functions. They accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. 
These results support the next higher level of parallelism in PCID, where groups of several hundred frames, each producing one resolved image, are sent to cliques of several hundred cores in a round-robin fashion. Current efforts toward further performance enhancement for PCID are shifting toward using the PlayStations in conjunction with the Xeons to take advantage of outstanding price/performance as well as the Flops/Watt cost advantage. We are fine-tuning the PCID parallelization strategy to balance processing over Xeons and Cell BEs to find an optimal partitioning of PCID over the heterogeneous processors. A high performance information management system that exploits native Infiniband multicast is used to improve latency among the head nodes. Using a publication/subscription oriented information management system to implement a unified communications platform makes runs on large HPCs with thousands of intercommunicating cores more flexible and more fault tolerant. It features a loose coupling of publishers to subscribers through intervening brokers. We are also working on enhancing performance for both Xeons and Cell BEs by moving selected operations to single precision. Techniques for adapting the code to single precision, and performance results, are reported.
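The frame-level parallelism described above — handing frames to workers round-robin, each worker running a Fourier transform kernel — can be sketched with a naive O(n²) DFT standing in for the tuned FFTW/SPIRAL kernels. The helper names are illustrative, not PCID functions.

```python
import cmath

def dft(x):
    """Naive O(n^2) DFT, standing in for the tuned FFT kernels."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def round_robin(frames, n_workers):
    """Assign frames to workers round-robin, as PCID hands frame
    groups to core cliques."""
    queues = [[] for _ in range(n_workers)]
    for i, f in enumerate(frames):
        queues[i % n_workers].append(f)
    return queues

frames = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
spectra = [dft(f) for q in round_robin(frames, 2) for f in q]
```

Because each frame's transform is independent, this pattern scales until per-frame work no longer hides the cost of distributing frames — consistent with the scaling to ~100 core pairs reported above.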
The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. We then estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs), and perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.
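The two metrics used above reduce to simple ratios: single-CPU efficiency is observed/peak performance, and scalability is speedup (and parallel efficiency) relative to the smallest configuration. The timings below are hypothetical, purely to show the arithmetic; they are not the paper's measurements.

```python
def efficiency(observed, peak):
    """Single-processor efficiency: observed / peak performance."""
    return observed / peak

def scaling(times):
    """Speedup and parallel efficiency relative to the smallest CPU count.
    `times` is a list of (cpu_count, wall_time) pairs."""
    p0, t0 = times[0]
    return {p: (t0 / t, (t0 / t) / (p / p0)) for p, t in times}

# Hypothetical timings for 2, 8, 16 and 32 CPUs.
results = scaling([(2, 100.0), (8, 27.0), (16, 15.0), (32, 9.0)])
```

Parallel efficiency below 1.0 at 32 CPUs is the signature of the memory-hierarchy contention the study targets: the functional units outrun the caches-buses-fabric-memory path.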
Single-event upset in highly scaled commercial silicon-on-insulator PowerPC microprocessors
NASA Technical Reports Server (NTRS)
Irom, Farokh; Farmanesh, Farhad H.
2004-01-01
Single event upset effects from heavy ions are measured for Motorola and IBM silicon-on-insulator (SOI) microprocessors with different feature sizes and core voltages. The results are compared with results for similar devices with bulk substrates. The cross sections of the SOI processors are lower than those of their bulk counterparts, but the threshold is about the same, even though the charge collection depth is more than an order of magnitude smaller in the SOI devices. The scaling of the cross section with reduction of feature size, and its core voltage dependence, for SOI microprocessors is discussed.
A System Description of the Cocaine Trade
1994-01-01
72 C.17. Drug Market Hierarchy Tables (Cells A112 to N155) ............ 74 C.18. Purity Levels (Cells A156 to E71...This report also provides detailed information on how to use the model. The spreadsheets are available for either IBM (DOS) or Apple-based machines upon...red square (IBM) or arrow (Apple) in the upper right-hand corner have a note "behind" the cell explaining something about the data in the cell, or if
Accelerating molecular dynamic simulation on the cell processor and Playstation 3.
Luttmann, Edgar; Ensign, Daniel L; Vaidyanathan, Vishal; Houston, Mike; Rimon, Noam; Øland, Jeppe; Jayachandran, Guha; Friedrichs, Mark; Pande, Vijay S
2009-01-30
Implementation of molecular dynamics (MD) calculations on novel architectures will vastly increase their power to calculate the physical properties of complex systems. Herein, we detail algorithmic advances developed to accelerate MD simulations on the Cell processor, a commodity processor found in the PlayStation 3 (PS3). In particular, we discuss issues regarding memory access versus computation and the types of calculations which are best suited for streaming processors such as the Cell, focusing on implicit solvation models. We conclude with a comparison of improved performance on the PS3's Cell processor over more traditional processors. (c) 2008 Wiley Periodicals, Inc.
Hiniker, Annie; Daniels, Brianne H; Lee, Han S; Margeta, Marta
2013-07-01
Inclusion body myositis (IBM) is a slowly progressive inflammatory myopathy of the elderly that does not show significant clinical improvement in response to steroid therapy. Distinguishing IBM from polymyositis (PM) is clinically important since PM is steroid-responsive; however, the two conditions can show substantial histologic overlap. We performed quantitative immunohistochemistry for (1) autophagic markers LC3 and p62 and (2) protein aggregation marker TDP-43 in 53 subjects with pathologically diagnosed PM, IBM, and two intermediate T cell-mediated inflammatory myopathies (polymyositis with COX-negative fibers and possible IBM). The percentage of stained fibers was significantly higher in IBM than PM for all three immunostains, but the markers varied in sensitivity and specificity. In particular, both LC3 and p62 were sensitive markers of IBM, but the tradeoff between sensitivity and specificity was smaller (and diagnostic utility thus greater) for LC3 than for p62. In contrast, TDP-43 immunopositivity was highly specific for IBM, but the sensitivity of this test was low, with definitive staining present in just 67% of IBM cases. To differentiate IBM from PM, we thus recommend using a panel of LC3 and TDP-43 antibodies: the finding of <14% LC3-positive fibers helps exclude IBM, while >7% of TDP-43-positive fibers strongly supports a diagnosis of IBM. These data provide support for the hypothesis that disruption of autophagy and protein aggregation contribute to IBM pathogenesis.
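The recommended two-antibody panel reduces to a simple decision rule. A minimal sketch using the cutoffs quoted in the abstract (<14% LC3-positive fibers argues against IBM; >7% TDP-43-positive fibers strongly supports it); the function name and return labels are illustrative, not from the paper:

```python
def ibm_panel(lc3_pct, tdp43_pct):
    """Apply the two-marker panel: lc3_pct and tdp43_pct are the percentages
    of immunopositive fibers for LC3 and TDP-43, respectively."""
    if lc3_pct < 14:                       # sensitive marker below cutoff
        return "IBM unlikely"
    if tdp43_pct > 7:                      # highly specific marker above cutoff
        return "IBM strongly supported"
    return "indeterminate"
```

A biopsy with 10% LC3-positive fibers is classified against IBM regardless of TDP-43, reflecting LC3's role as the sensitive screen and TDP-43's role as the specific confirmation.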
Parallel Semi-Implicit Spectral Element Atmospheric Model
NASA Astrophysics Data System (ADS)
Fournier, A.; Thomas, S.; Loft, R.
2001-05-01
The shallow-water equations (SWE) have long been used to test atmospheric-modeling numerical methods. The SWE contain essential wave-propagation and nonlinear effects of more complete models. We present a semi-implicit (SI) improvement of the Spectral Element Atmospheric Model to solve the SWE (SEAM, Taylor et al. 1997, Fournier et al. 2000, Thomas & Loft 2000). SE methods are h-p finite element methods combining the geometric flexibility of size-h finite elements with the accuracy of degree-p spectral methods. Our work suggests that exceptional parallel-computation performance is achievable by a General-Circulation-Model (GCM) dynamical core, even at modest climate-simulation resolutions (>1°). The code derivation involves weak variational formulation of the SWE, Gauss(-Lobatto) quadrature over the collocation points, and Legendre cardinal interpolators. Appropriate weak variation yields a symmetric positive-definite Helmholtz operator. To meet the Ladyzhenskaya-Babuska-Brezzi inf-sup condition and avoid spurious modes, we use a staggered grid. The SI scheme combines leapfrog and Crank-Nicolson schemes for the nonlinear and linear terms, respectively. The localization of operations to elements ideally fits the method to cache-based microprocessor architectures: derivatives are computed as collections of small (8x8), naturally cache-blocked matrix-vector products. SEAM also has desirable boundary-exchange communication, like finite-difference models. Timings on the IBM SP and Compaq ES40 supercomputers indicate that the SI code (20-min timestep) requires 1/3 the CPU time of the explicit code (2-min timestep) at T42 resolution. Both codes scale nearly linearly out to 400 processors. We achieved single-processor performance up to 30% of peak for both codes on the 375-MHz IBM Power3 processors. Fast computation and linear scaling lead to a useful climate-simulation dycore only if enough model time is computed per unit wall-clock time.
An efficient SI solver is essential to substantially increase this rate. Parallel preconditioning for an iterative conjugate-gradient elliptic solver is described. We are building a GCM dycore capable of 200 GFLOPS sustained performance on clustered RISC/cache architectures using hybrid MPI/OpenMP programming.
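The reason the SI scheme tolerates a tenfold larger timestep can be seen from the amplification factor of Crank-Nicolson on a linear oscillatory (gravity-wave-like) term du/dt = iωu: it is exactly neutral at any step size, whereas a fully explicit one-step update always amplifies. This is a toy stability illustration under that model problem, not code from SEAM (and forward Euler here stands in for the explicit scheme only to show the contrast):

```python
def cn_amplification(omega, dt):
    """|g| for Crank-Nicolson applied to du/dt = i*omega*u:
    neutrally stable for every step size."""
    return abs((1 + 0.5j * omega * dt) / (1 - 0.5j * omega * dt))

def explicit_amplification(omega, dt):
    """|g| for a fully explicit (forward Euler) step on the same term:
    greater than 1 for any dt > 0, so dt must stay small."""
    return abs(1 + 1j * omega * dt)
```

Even at a 20-minute step (dt = 1200 s) on a fast wave, the Crank-Nicolson factor stays at 1, which is why only the slow nonlinear terms need the short explicit treatment.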
Control of a small working robot on a large flexible manipulator for suppressing vibrations
NASA Technical Reports Server (NTRS)
Lee, Soo Han
1991-01-01
The short-term objectives of this research are the completion of the experimental configuration of the Small Articulated Robot (SAM) and the derivation of the actuator dynamics of the Robotic Arm, Large and Flexible (RALF). In order to control vibrations, SAM should have a larger bandwidth than that of the vibrations. The bandwidth of SAM consists of three parts: structural rigidity, processing speed of the controller, and motor speed. The structural rigidity was increased to a reasonably high value by attaching aluminum angles at weak points and replacing thin side plates with thicker ones. The high processing speed of the controller was achieved by using parallel processors (three 68000 processors, three interface boards, and one main processor (IBM-XT)). The maximum joint speed and acceleration of SAM are known to be about 4 rad/s and 15 rad/s^2. Hence SAM can move only 0.04 rad at 3 Hz, which is the natural frequency of RALF. This will be checked by experiment.
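The quoted 0.04 rad figure follows from the acceleration limit, not the speed limit. Assuming sinusoidal joint motion θ = A·sin(2πft), the peak velocity is Aω and the peak acceleration Aω² with ω = 2πf, so the admissible amplitude at frequency f is the smaller of v_max/ω and a_max/ω². A quick check of that reasoning (a sketch, not from the report):

```python
import math

def max_amplitude(f_hz, v_max, a_max):
    """Largest sinusoidal amplitude theta = A*sin(2*pi*f*t) achievable
    within the joint's velocity (v_max) and acceleration (a_max) limits."""
    w = 2 * math.pi * f_hz
    return min(v_max / w, a_max / w ** 2)
```

At 3 Hz with v_max = 4 rad/s and a_max = 15 rad/s^2, the acceleration bound (about 0.042 rad) binds well before the velocity bound (about 0.21 rad), matching the abstract's 0.04 rad.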
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, R.; Naik, H.; Beckman, P.
Providing fault tolerance in high-end petascale systems, consisting of millions of hardware components and complex software stacks, is becoming an increasingly challenging task. Checkpointing continues to be the most prevalent technique for providing fault tolerance in such high-end systems. Considerable research has focused on optimizing checkpointing; however, in practice, checkpointing still involves a high-cost overhead for users. In this paper, we study the checkpointing overhead seen by various applications running on leadership-class machines like the IBM Blue Gene/P at Argonne National Laboratory. In addition to studying popular applications, we design a methodology to help users understand and intelligently choose an optimal checkpointing frequency to reduce the overall checkpointing overhead incurred. In particular, we study the Grid-Based Projector-Augmented Wave application, the Carr-Parrinello Molecular Dynamics application, the Nek5000 computational fluid dynamics application, and the Parallel Ocean Program application, and analyze their memory usage and possible checkpointing trends on 65,536 processors of the Blue Gene/P system.
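The abstract does not give the formula behind its optimal checkpointing frequency, but a standard first-order choice is Young's approximation, which balances checkpoint cost against the expected recomputation after a failure. A sketch under that assumption (this is the textbook rule, not necessarily the paper's methodology):

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order optimum: checkpoint every sqrt(2*C*MTBF) seconds,
    where C is the time to write one checkpoint and MTBF is the system's
    mean time between failures."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)
```

For example, a 200-second checkpoint on a machine with a 2-day MTBF suggests checkpointing roughly every 2.3 hours; checkpointing much more often wastes I/O time, much less often wastes rework.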
Performance of a parallel thermal-hydraulics code TEMPEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fann, G.I.; Trent, D.S.
The authors describe the parallelization of the TEMPEST thermal-hydraulics code. The serial version of this code is used for production-quality 3-D thermal-hydraulics simulations. Good speedup was obtained with a parallel diagonally preconditioned BiCGStab non-symmetric linear solver, using a spatial domain decomposition approach for the semi-iterative pressure-based and mass-conserving algorithm. The test case used here to illustrate the performance of the BiCGStab solver is a 3-D natural convection problem modeled using finite volume discretization in cylindrical coordinates. The BiCGStab solver replaced the LSOR-ADI method for solving the pressure equation in TEMPEST; BiCGStab also solves the coupled thermal energy equation. Scaling performance for three problem sizes (221,220 nodes, 358,120 nodes, and 701,220 nodes) is presented. These problems were run on two different parallel machines: the IBM SP and the SGI PowerChallenge. The largest problem attains a speedup of 68 on a 128-processor IBM SP. In real terms, this is over 34 times faster than the fastest serial production time using the LSOR-ADI solver.
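For reference, a diagonally (Jacobi) preconditioned BiCGStab iteration of the kind credited above can be sketched compactly. This is the textbook right-preconditioned form on plain Python lists, suitable only for small demonstration systems, and is in no way TEMPEST's parallel implementation:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def bicgstab_jacobi(A, b, tol=1e-10, maxiter=200):
    """Right-preconditioned BiCGStab with M = diag(A) for a nonsymmetric
    linear system A x = b (dense row-list representation)."""
    n = len(b)
    d = [A[i][i] for i in range(n)]              # Jacobi preconditioner
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    rhat = r[:]                                  # shadow residual
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(maxiter):
        rho_new = dot(rhat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        phat = [pi / di for pi, di in zip(p, d)]
        v = matvec(A, phat)
        alpha = rho_new / dot(rhat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        x = [xi + alpha * phi for xi, phi in zip(x, phat)]
        if dot(s, s) ** 0.5 < tol:               # converged on the half step
            return x
        shat = [si / di for si, di in zip(s, d)]
        t = matvec(A, shat)
        omega = dot(t, s) / dot(t, t)
        x = [xi + omega * shi for xi, shi in zip(x, shat)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if dot(r, r) ** 0.5 < tol:
            return x
        rho = rho_new
    return x
```

On a small diagonally dominant test matrix the solver drives the residual below tolerance in a handful of iterations, which is the behavior a good preconditioner buys at scale.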
Performing quantum computing experiments in the cloud
NASA Astrophysics Data System (ADS)
Devitt, Simon J.
2016-09-01
Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sectors, combined with extraordinary theoretical and experimental progress, has solidified this technology as a major advancement in the 21st century. As anticipated by many, some of the first realizations of quantum computing technology have occurred over the cloud, with users logging onto dedicated hardware over the classical internet. Recently, IBM released the Quantum Experience, which allows users to access a five-qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments. We utilize the IBM chip to realize protocols in quantum error correction, quantum arithmetic, quantum graph theory, and fault-tolerant quantum computation by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage of development in quantum information technology.
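The kind of circuit run on such a chip can be previewed classically. The sketch below simulates the smallest interesting case, a Hadamard followed by a CNOT producing a Bell pair, by direct statevector arithmetic; on real hardware the measured frequencies approximate these ideal probabilities up to the noise the paper notes. (This is a generic two-qubit simulation, not the paper's experiments.)

```python
import math

# 2-qubit statevector, basis order |00>, |01>, |10>, |11> (left bit = qubit 0)
state = [1.0, 0.0, 0.0, 0.0]                   # start in |00>

def hadamard_q0(s):
    """Hadamard on qubit 0: mixes the index pairs (|00>,|10>) and (|01>,|11>)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot_q0_q1(s):
    """CNOT with control qubit 0, target qubit 1: swaps |10> and |11>."""
    return [s[0], s[1], s[3], s[2]]

bell = cnot_q0_q1(hadamard_q0(state))
probs = [abs(a) ** 2 for a in bell]            # ideal: 0.5 on |00> and |11>
```

The ideal output is an even split between |00> and |11> with no weight on the other outcomes, the signature correlation a cloud user would look for in the returned counts.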
Solutions and debugging for data consistency in multiprocessors with noncoherent caches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, D.; Mendelson, B.; Breternitz, M. Jr.
1995-02-01
We analyze two important problems that arise in shared-memory multiprocessor systems. The stale data problem involves ensuring that data items in local memory of individual processors are current, independent of writes done by other processors. False sharing occurs when two processors have copies of the same shared data block but update different portions of the block. The false sharing problem involves guaranteeing that subsequent writes are properly combined. In modern architectures these problems are usually solved in hardware, by exploiting mechanisms for hardware-controlled cache consistency. This leads to more expensive and nonscalable designs. Therefore, we are concentrating on software methods for ensuring cache consistency that would allow for affordable and scalable multiprocessing systems. Unfortunately, providing software control is nontrivial, both for the compiler writer and for the application programmer. For this reason we are developing a debugging environment that will facilitate the development of compiler-based techniques and will help the programmer to tune his or her application using explicit cache management mechanisms. We extend the notion of a race condition for the IBM Shared Memory System POWER/4, taking into consideration its noncoherent caches, and propose techniques for detection of false sharing problems. Identification of the stale data problem is discussed as well, and solutions are suggested.
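A software fix for false sharing must merge two privately updated copies of one block when they are written back. A common diff-based approach compares each copy byte-by-byte against the original block contents; a byte changed in both copies is a genuine write-write race rather than false sharing. This is a minimal sketch of that merge rule, not IBM's actual mechanism:

```python
def merge_block(base, copy_a, copy_b):
    """Combine two privately updated copies of one shared block.
    Assumes the writers touched disjoint bytes (true false sharing);
    a byte differing from base in BOTH copies signals a real race."""
    merged = bytearray(base)
    for i in range(len(base)):
        a_dirty = copy_a[i] != base[i]
        b_dirty = copy_b[i] != base[i]
        if a_dirty and b_dirty:
            raise ValueError("write-write race at byte %d" % i)
        merged[i] = copy_a[i] if a_dirty else copy_b[i]
    return bytes(merged)
```

Two processors updating opposite ends of the same block merge cleanly, while conflicting writes to the same byte are flagged, which is exactly the distinction a false-sharing debugger needs to draw.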
High-performance ultra-low power VLSI analog processor for data compression
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1996-01-01
An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.
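The array's function per column, finding the codebook vector closest to each input vector, is the standard vector-quantization nearest-neighbor search. A digital sketch of the same comparison (illustrative only, not the patented analog circuit):

```python
def nearest_codebook_index(input_vec, codebook):
    """Return the index of the codebook vector closest to input_vec in
    squared Euclidean distance -- the digital analogue of the array's
    column-parallel closeness comparison."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(codebook)), key=lambda n: dist2(input_vec, codebook[n]))
```

The analog array evaluates all N column distances simultaneously and picks the winner; this loop does the same comparison sequentially, returning the codebook vector index that would be emitted for each input vector.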
Present status and future of the sophisticated work station
NASA Astrophysics Data System (ADS)
Ishida, Haruhisa
The advantages of the workstation are explained by comparing its software and hardware capabilities with those of the personal computer. Desktop publishing is described as one example that exploits the workstation's capabilities. The future of UNIX as a workstation operating system is predicted by describing the competition between the AT&T/Sun Microsystems group, which intends to take the leadership by integrating the currently most popular Berkeley version with System V, and the group led by IBM. The development of RISC processors and MITI's TRON Plan and Sigma Project are also mentioned as background.
AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM
NASA Technical Reports Server (NTRS)
Miko, J.
1994-01-01
Scientists at Goddard have developed an efficient and powerful program, An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform, which combines the performance of real- and complex-valued one-dimensional Fast Fourier Transforms (FFTs) to execute a two-dimensional FFT and its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real-valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex-valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex-valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed on an IBM PC/AT compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64 point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing rate to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 ms.
Documentation is included in the price of the program. Source code is written in C, 8086 Assembly, and Texas Instruments TMS320C30 Assembly Languages. This program is available on a 5.25 inch 360K MS-DOS format diskette. IBM and IBM PC are registered trademarks of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
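The row-column decomposition described above generalizes to any power-of-two frame size. A compact sketch in plain Python (radix-2 rather than the program's radix-4, and nothing like the TMS320C30 implementation): 1-D FFTs over rows, then over columns, then squared magnitudes for the power spectrum.

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def power_spectrum_2d(frame):
    """Row-column decomposition: 1-D FFT of every row, then of every column,
    then |.|^2 of each coefficient."""
    nrows, ncols = len(frame), len(frame[0])
    rows = [fft(row) for row in frame]
    cols = [fft([rows[r][c] for r in range(nrows)]) for c in range(ncols)]
    # cols[c][r] holds the 2-D coefficient F[r][c]
    return [[abs(cols[c][r]) ** 2 for c in range(ncols)] for r in range(nrows)]
```

A constant frame puts all its power in the DC coefficient, a quick sanity check on the decomposition.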
MOROSETTI, R.; GLIUBIZZI, C.; BROCCOLINI, A.; SANCRICCA, C.; MIRABELLA, M.
2011-01-01
Mesoangioblasts are a class of adult stem cells of mesodermal origin, potentially useful for the treatment of primitive myopathies of different etiologies. Extensive in vitro and in vivo studies in animal models of muscular dystrophy have demonstrated the ability of mesoangioblasts to repair skeletal muscle when injected intra-arterially. In a previous work we demonstrated that mesoangioblasts obtained from diagnostic muscle biopsies of IBM patients display defective differentiation down the skeletal muscle lineage, and that this block can be corrected in vitro by transient MyoD transfection. We are currently investigating different pathways involved in mesoangioblast skeletal muscle differentiation and exploring alternative stimulatory approaches not requiring extensive cell manipulation. This will allow us to obtain safe, easy, and efficient molecular or pharmacological modulation of pro-myogenic pathways in IBM mesoangioblasts. It is of crucial importance to identify factors (i.e., cytokines, growth factors) produced by muscle or inflammatory cells and released in the surrounding milieu that are able to regulate the differentiation ability of IBM mesoangioblasts. To promote myogenic differentiation of endogenous mesoangioblasts in IBM muscle, modulation of such selectively dysregulated target molecules would be a handier approach to enhancing muscle regeneration than transplantation techniques. Studies on the biological characteristics of IBM mesoangioblasts, with their aberrant differentiation behavior, the signaling pathways possibly involved in their differentiation block, and the possible strategies to overcome it in vivo, might provide new insights to better understand the etiopathogenesis of this crippling disorder and to identify molecular targets susceptible to therapeutic modulation. PMID:21842589
Performance Analysis of a Hybrid Overset Multi-Block Application on Multiple Architectures
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
This paper presents a detailed performance analysis of a multi-block overset grid computational fluid dynamics application on multiple state-of-the-art computer architectures. The application is implemented using a hybrid MPI+OpenMP programming paradigm that exploits both coarse- and fine-grain parallelism: the former via MPI message passing and the latter via OpenMP directives. The hybrid model also extends the applicability of multi-block programs to large clusters of SMP nodes by overcoming the restriction that the number of processors be less than the number of grid blocks. A key kernel of the application, namely the LU-SGS linear solver, had to be modified to enhance the performance of the hybrid approach on the target machines. Investigations were conducted on cacheless Cray SX6 vector processors, cache-based IBM Power3 and Power4 architectures, and single-system-image SGI Origin3000 platforms. Overall results for complex vortex dynamics simulations demonstrate that the SX6 achieves the highest performance and outperforms the RISC-based architectures; however, the best scaling performance was achieved on the Power3.
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1993-01-01
The need to manage large amounts of data on robotically controlled devices has been critical to the mission of this Agency for many years. In many respects this Agency has helped pioneer, with its industry counterparts, the development of a number of products long before these systems became commercially available. Numerous attempts have been made to field both robotically controlled tape and optical disk technology and systems to satisfy our tertiary storage needs. Custom-developed products were architected, designed, and developed without vendor partners over the past two decades to field workable systems to handle our ever-increasing storage requirements. Many of the attendees of this symposium are familiar with some of the older products, such as the Braegen Automated Tape Libraries (ATLs), the IBM 3850, and the Ampex TeraStore, just to name a few. In addition, we embarked on an in-house development of a shared-disk input/output support processor to manage our ever-increasing tape storage needs. For all intents and purposes, this system was a file server by current definitions, which used CDC Cyber computers as the control processors. It served us well and was just recently removed from production usage.
Time-variant analysis of rotorcraft systems dynamics - An exploitation of vector processors
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Xie, M.; Shareef, N. H.
1993-01-01
In this paper a generalized algorithmic procedure is presented for handling constraints in mechanical transmissions. The latter are treated as multibody systems of interconnected rigid/flexible bodies. The constraint Jacobian matrices are generated automatically and suitably updated in time, depending on the geometrical and kinematical constraint conditions describing the interconnection between shafts or gears. The types of constraints are classified based on the interconnection of the bodies, assuming that one or more points of contact exist between them. The effects due to elastic deformation of the flexible bodies are included by allowing each body element to undergo small deformations. The procedure is based on recursively formulated Kane's dynamical equations of motion and the finite element method, including the concept of geometric stiffening effects. The method is implemented on an IBM 3090-600J vector processor with pipelining capabilities. A significant increase in the speed of execution is achieved by vectorizing the developed code in computationally intensive areas. An example consisting of two meshing disks rotating at high angular velocity is presented. Applications are intended for the study of the dynamic behavior of helicopter transmissions.
Bermuda Triangle: a subsystem of the 168/E interfacing scheme used by Group B at SLAC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxoby, G.J.; Levinson, L.J.; Trang, Q.H.
1979-12-01
The Bermuda Triangle system is a method of interfacing several 168/E microprocessors to a central system for control of the processors and overlaying their memories. The system is a three-way interface with I/O ports to a large buffer memory, a PDP11 Unibus, and a bus to the 168/E processors. Data may be transferred bidirectionally between any two ports. Two Bermuda Triangles are used, one for the program memory and one for the data memory. The program buffer memory stores the overlay programs for the 168/E, and the data buffer memory the incoming raw data, the data portion of the overlays, and the outgoing processed events. This buffering is necessary since the memories of 168/E microprocessors are small compared to the main program and the amount of data being processed. The link to the computer facility is via a Unibus-to-IBM-channel interface. A PDP11/04 controls the data flow. 7 figures, 4 tables. (RWR)
Long-range interactions and parallel scalability in molecular simulations
NASA Astrophysics Data System (ADS)
Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko
2007-01-01
Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium 4, IBM Power4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested the scalability on four different networks, namely InfiniBand, Gigabit Ethernet, Fast Ethernet, and a nearly uniform memory architecture, i.e., one in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of 128, 512, and 2048 lipid molecules were used as the test systems, representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
Comparing the Performance of Blue Gene/Q with Leading Cray XE6 and InfiniBand Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav
2013-01-21
Three types of systems dominate the current High Performance Computing landscape: the Cray XE6, the IBM Blue Gene, and commodity clusters using InfiniBand. These systems have quite different characteristics, making the choice for a particular deployment difficult. The XE6 uses Cray's proprietary Gemini 3-D torus interconnect with two nodes at each network endpoint. The latest IBM Blue Gene/Q uses a single socket integrating processor and communication in a 5-D torus network. InfiniBand provides the flexibility of using nodes from many vendors connected in many possible topologies. The performance characteristics of each vary vastly, along with their utilization models. In this work we compare the performance of these three systems using a combination of micro-benchmarks and a set of production applications. In particular we discuss the causes of variability in performance across the systems and also quantify where performance is lost using a combination of measurements and models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.
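One reason torus dimensionality matters is hop count: with wraparound links the minimum distance per dimension is the shorter way around the ring, and a higher-dimensional torus of the same node count has a smaller diameter. A sketch of that metric (a generic illustration, not the paper's measurement methodology; the example dimensions are hypothetical):

```python
def torus_hops(a, b, dims):
    """Minimum hop count between nodes a and b on a torus whose
    per-dimension sizes are given by dims; wraparound links let
    traffic travel either direction around each ring."""
    total = 0
    for x, y, d in zip(a, b, dims):
        diff = abs(x - y) % d
        total += min(diff, d - diff)     # shorter way around this ring
    return total
```

For 512 nodes, an 8x8x8 3-D torus has diameter 4+4+4 = 12 hops, while a 4x4x4x4x2 5-D torus tops out at 2+2+2+2+1 = 9, part of why Blue Gene/Q's 5-D network keeps worst-case latency down.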
UWGSP6: a diagnostic radiology workstation of the future
NASA Astrophysics Data System (ADS)
Milton, Stuart W.; Han, Sang; Choi, Hyung-Sik; Kim, Yongmin
1993-06-01
The University of Washington's Image Computing Systems Laboratory (ICSL) has been involved in research into the development of a series of PACS workstations since the mid-1980s. The most recent research, a joint UW-IBM project, attempted to create a diagnostic radiology workstation using an IBM RISC System/6000 (RS6000) computer workstation and the X Window System. While the results are encouraging, there are inherent limitations in the workstation hardware which prevent it from providing an acceptable level of functionality for diagnostic radiology. Recognizing the RS6000 workstation's limitations, a parallel effort was initiated to design a workstation, UWGSP6 (Univ. of Washington Graphics System Processor #6), that provides the required functionality. This paper documents the design of UWGSP6, which not only addresses the requirements for a diagnostic radiology workstation in terms of display resolution, response time, etc., but also includes the processing performance necessary to support key functions needed in the implementation of algorithms for computer-aided diagnosis. The paper includes a description of the workstation architecture, and specifically its image processing subsystem. Verification of the design through hardware simulation is then discussed, and finally, performance of selected algorithms based on detailed simulation is provided.
Geometrical model for DBMS: an experimental DBMS using IBM solid modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, D.E.D.L.
1985-01-01
This research presents a new model for database management systems (DBMS). The new model, Geometrical DBMS, is based on using solid modeling technology in designing and implementing DBMS. The Geometrical DBMS is implemented using the IBM solid modeling Geometric Design Processor (GDP). Built on computer-graphics concepts, Geometrical DBMS is a unique model. Traditionally, researchers start with one of the existing DBMS models and then put a graphical front end on it. In Geometrical DBMS, the graphical aspect of the model is not an alien concept tailored to the model but is, as a matter of fact, the atom around which the model is designed. The main idea in Geometrical DBMS is to allow the user and the system to refer to and manipulate data items as solid objects in 3D space, and to represent a record as a group of logically related solid objects. In Geometrical DBMS, a hierarchical structure is used to represent the data relations, and the user sees the data as a group of arrays; yet, for the user and the system together, the data structure is a multidimensional tree.
A Non-Cut Cell Immersed Boundary Method for Use in Icing Simulations
NASA Technical Reports Server (NTRS)
Sarofeen, Christian M.; Noack, Ralph W.; Kreeger, Richard E.
2013-01-01
This paper describes a computational fluid dynamics method used for modeling changes in aircraft geometry due to icing. While an aircraft undergoes icing, the accumulated ice geometrically alters the aerodynamic surfaces, and computational icing simulations must take the corresponding geometric change into consideration. The method used herein to represent the geometric change due to icing is a non-cut cell Immersed Boundary Method (IBM). Computational cells of a body-fitted grid for the clean aerodynamic geometry that lie inside a predicted ice formation are identified, and an IBM is then used to change these cells from active computational cells to cells with the properties of viscous solid bodies. This method has been implemented in the NASA-developed node-centered, finite-volume computational fluid dynamics code FUN3D. The presented capability is tested for two-dimensional airfoils, including a clean airfoil, an iced airfoil, and an airfoil in harmonic pitching motion about its quarter chord. For these simulations, velocity contours, pressure distributions, and coefficients of lift, drag, and pitching moment about the airfoil's quarter chord are computed and compared against experimental results, XFOIL (a higher-order panel method code with viscous effects), and the results from FUN3D's original solution process. The results of the IBM simulations compare satisfactorily with the experimental results, the XFOIL results, and the results from FUN3D's original solution process.
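The cell-blanking step described in this abstract can be sketched as follows. This is a minimal 2D illustration under stated assumptions, not FUN3D code: the predicted ice shape is taken as a polygon, and a point-in-polygon test on each cell center decides which cells switch from active to viscous solid (in the non-cut-cell approach, cells are switched whole, never cut).

```python
def point_in_polygon(x, y, poly):
    # Ray-casting test: count crossings of polygon edges by a
    # horizontal ray extending to the right from (x, y).
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's altitude
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def blank_iced_cells(cell_centers, ice_polygon):
    """Return one flag per cell: 'solid' for cells whose center lies
    inside the predicted ice shape, 'active' otherwise."""
    return ['solid' if point_in_polygon(x, y, ice_polygon) else 'active'
            for (x, y) in cell_centers]
```

For example, with a unit-square ice shape, a cell centered inside the square is blanked while its neighbor outside stays active.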
FLY MPI-2: a parallel tree code for LSS
NASA Astrophysics Data System (ADS)
Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.
2006-04-01
New version program summary
Program title: FLY 3.1
Catalogue identifier: ADSC_v2_0
Licensing provisions: yes
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
No. of lines in distributed program, including test data, etc.: 158 172
No. of bytes in distributed program, including test data, etc.: 4 719 953
Distribution format: tar.gz
Programming language: Fortran 90, C
Computer: Beowulf cluster, PC, MPP systems
Operating system: Linux, Aix
RAM: 100M words
Catalogue identifier of previous version: ADSC_v1_0
Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159
Does the new version supersede the previous version?: yes
Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force.
Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986).
Reasons for the new version: The new version of FLY is implemented using the MPI-2 standard; the distributed version 3.1 was developed using the MPICH2 library on a PC Linux cluster. The performance of FLY now places it among the most powerful parallel codes for tree N-body simulations. Another important new feature is the availability of an interface with hydrodynamical Paramesh-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible.
The idea of building an interface between two codes that have different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for dark matter evolution, together with a code specialized for hydrodynamical components that uses a Paramesh block structure. Summary of revisions: The parallel communication scheme was totally changed; the new version adopts the MPICH2 library, so FLY can now be executed on all Unix systems having an MPI-2 standard library. The main data structures are declared in a module procedure of FLY (the fly_h.F90 routine). FLY creates an MPI window object for one-sided communication for each of the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities, and accelerations); win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole moments, tree structure, and grouping cells). Other windows are created for dynamic load balance and global counters. Restrictions: The program uses the leapfrog integration scheme, but this could be changed by the user. Unusual features: FLY uses the MPI-2 standard; the MPICH2 library was adopted on Linux systems. To run this version of FLY, the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide, and a Reference manuscript. Running time: An IBM Linux Cluster 1350 at Cineca (512 nodes with 2 processors per node and 2 GB RAM per processor) was used for performance tests. Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors). Internal network: Myricom LAN card, "C" and "D" versions. Operating system: Linux SuSE SLES 8.
The code was compiled using the mpif90 compiler version 8.1 with basic optimization options, so that the measured performance can be usefully compared with other generic clusters.
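FLY's solution method is the Barnes-Hut hierarchical tree. The following stand-alone sketch (a 2D quadtree rather than FLY's 3D oct-tree, and plain Python rather than Fortran/MPI; all names here are made up for illustration) shows the core idea: bodies are inserted into a spatial tree, and a distant cell is replaced by its center of mass whenever its size-to-distance ratio falls below an opening angle.

```python
import math

THETA = 0.5  # opening angle: a cell of size s at distance r is treated
             # as a single pseudo-particle when s / r < THETA

class Node:
    """One square cell of a 2D Barnes-Hut tree (quadtree stand-in
    for the oct-tree used in 3D)."""
    def __init__(self, cx, cy, size):
        self.cx, self.cy, self.size = cx, cy, size
        self.mass = 0.0
        self.mx = self.my = 0.0       # mass-weighted coordinate sums
        self.body = None              # single body stored in a leaf
        self.children = None

    def _child(self, x, y):
        return self.children[2 * (x >= self.cx) + (y >= self.cy)]

    def insert(self, x, y, m):
        self.mass += m
        self.mx += m * x
        self.my += m * y
        if self.children is None and self.body is None:
            self.body = (x, y, m)     # empty leaf: keep the body here
            return
        if self.children is None:     # occupied leaf: subdivide first
            self.children = [Node(self.cx + dx * self.size / 4,
                                  self.cy + dy * self.size / 4,
                                  self.size / 2)
                             for dx in (-1, 1) for dy in (-1, 1)]
            bx, by, bm = self.body
            self.body = None
            self._child(bx, by).insert(bx, by, bm)
        self._child(x, y).insert(x, y, m)

    def accel(self, x, y, eps=1e-9):
        """Gravitational acceleration at (x, y), with G = 1."""
        if self.mass == 0.0:
            return (0.0, 0.0)
        dx = self.mx / self.mass - x
        dy = self.my / self.mass - y
        r = math.hypot(dx, dy)
        if r < eps and self.children is None:
            return (0.0, 0.0)         # the query body itself
        if self.children is None or self.size < THETA * r:
            f = self.mass / (r ** 3 + eps)   # monopole approximation
            return (f * dx, f * dy)
        ax = ay = 0.0
        for child in self.children:
            cax, cay = child.accel(x, y, eps)
            ax += cax
            ay += cay
        return (ax, ay)
```

For two unit masses separated by 0.5, the acceleration on one body reduces to the direct two-body value 1/0.5^2 = 4.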
Design of a modular digital computer system, CDRL no. D001, final design plan
NASA Technical Reports Server (NTRS)
Easton, R. A.
1975-01-01
This report documents the engineering breadboard implementation of the CDRL no. D001 modular digital computer system, developed during the logic system design. This effort followed the architecture study completed and documented previously, and was intended to verify the concepts of a fault-tolerant, automatically reconfigurable, modular version of the computer system conceived during the architecture study. The system has a microprogrammed 32-bit word length, general register architecture and an instruction set consisting of a subset of the IBM System 360 instruction set plus additional fault tolerance firmware. The following areas are covered: breadboard packaging, central control element, central processing element, memory, input/output processor, and maintenance/status panel and electronics.
Morosetti, R; Gliubizzi, C; Broccolini, A; Sancricca, C; Mirabella, M
2011-06-01
Mesoangioblasts are a class of adult stem cells of mesodermal origin, potentially useful for the treatment of primary myopathies of different etiologies. Extensive in vitro and in vivo studies in animal models of muscular dystrophy have demonstrated the ability of mesoangioblasts to repair skeletal muscle when injected intra-arterially. In a previous work we demonstrated that mesoangioblasts obtained from diagnostic muscle biopsies of IBM patients display defective differentiation down the skeletal muscle lineage, and that this block can be corrected in vitro by transient MyoD transfection. We are currently investigating different pathways involved in mesoangioblast skeletal muscle differentiation and exploring alternative stimulatory approaches that do not require extensive cell manipulation. This will allow us to obtain safe, easy, and efficient molecular or pharmacological modulation of pro-myogenic pathways in IBM mesoangioblasts. It is of crucial importance to identify factors (i.e., cytokines, growth factors) produced by muscle or inflammatory cells and released in the surrounding milieu that are able to regulate the differentiation ability of IBM mesoangioblasts. To promote myogenic differentiation of endogenous mesoangioblasts in IBM muscle, modulation of such selectively dysregulated target molecules would be a more practical approach to enhancing muscle regeneration than transplantation techniques. Studies on the biological characteristics of IBM mesoangioblasts, with their aberrant differentiation behavior, the signaling pathways possibly involved in their differentiation block, and the possible strategies to overcome it in vivo, might provide new insights to better understand the etiopathogenesis of this crippling disorder and to identify molecular targets susceptible to therapeutic modulation.
Parallel community climate model: Description and user's guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drake, J.B.; Flanery, R.E.; Semeraro, B.D.
This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform, and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written, and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise performed with the parallel code is detailed, along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup, and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
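The patch decomposition described above can be sketched as follows. This is a schematic illustration, not PCCM2 code; the function name and the patch layout are made up, but it shows the essential property that each grid point has exactly one owning processor, so physics at that point touches only local data.

```python
def decompose(nlat, nlon, pgrid_rows, pgrid_cols):
    """Split an nlat x nlon grid into pgrid_rows x pgrid_cols geographical
    patches and return {processor_rank: [(lat_index, lon_index), ...]}."""
    owned = {}
    for i in range(nlat):
        for j in range(nlon):
            prow = i * pgrid_rows // nlat   # patch row owning latitude i
            pcol = j * pgrid_cols // nlon   # patch column owning longitude j
            rank = prow * pgrid_cols + pcol
            owned.setdefault(rank, []).append((i, j))
    return owned
```

For a 64 x 128 grid on a 4 x 4 processor grid, each of the 16 ranks owns an equal 16 x 32 patch of 512 points.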
Three-wheel air turbocompressor for PEM fuel cell systems
Rehg, Tim; Gee, Mark; Emerson, Terence P.; Ferrall, Joe; Sokolov, Pavel
2003-08-19
A fuel cell system comprises a compressor and a fuel processor downstream of the compressor. A fuel cell stack is in communication with the fuel processor and compressor. A combustor is downstream of the fuel cell stack. First and second turbines are downstream of the fuel processor and in parallel flow communication with one another. A distribution valve is in communication with the first and second turbines. The first and second turbines are mechanically engaged to the compressor. A bypass valve is intermediate the compressor and the second turbine, with the bypass valve enabling a compressed gas from the compressor to bypass the fuel processor.
Hines, Michael L; Eichner, Hubert; Schürmann, Felix
2008-08-01
Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
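The load-balancing idea described above can be sketched in a few lines. This is an illustrative simplification, not the authors' NEURON implementation: cells are assigned largest-first to the least-loaded processor, and any cell whose cost exceeds the ideal per-processor load is first split into two subtrees, here idealized as equal-cost halves.

```python
import heapq

def balance(cell_costs, nproc):
    """Greedy largest-first assignment with cell splitting.
    Returns the resulting load on each processor."""
    ideal = sum(cell_costs) / nproc
    pieces = []
    for c in cell_costs:
        if c > ideal:
            pieces += [c / 2, c / 2]   # split one big cell into two subtrees
        else:
            pieces.append(c)
    heap = [(0.0, p) for p in range(nproc)]   # (current load, processor)
    heapq.heapify(heap)
    loads = [0.0] * nproc
    for c in sorted(pieces, reverse=True):
        load, p = heapq.heappop(heap)         # least-loaded processor
        loads[p] = load + c
        heapq.heappush(heap, (loads[p], p))
    return loads
```

With one cell much larger than the rest (e.g. costs 8, 1, 1, 1, 1 on 2 processors), splitting the big cell yields a perfect 6/6 balance that whole-cell assignment cannot reach.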
A Comparison of Two Panasonic Lithium-Ion Batteries and Cells for the IBM Thinkpad
NASA Technical Reports Server (NTRS)
Jeevarajan, Judith A.; Cook, Joseph S.; Davies, Francis J.; Collins, Jacob; Bragg, Bobby J.
2003-01-01
The IBM Thinkpad 760XD has been used in the Orbiter and International Space Station since 2000. The Thinkpad is powered by a Panasonic Li-ion battery with a voltage of 10.8 V and a 3.0 Ah capacity. This Thinkpad is now being replaced by the IBM Thinkpad A31P, which has a Panasonic Li-ion battery with a voltage of 10.8 V and a 4.0 Ah capacity. Both batteries have protective circuit boards. The Panasonic battery for the Thinkpad 760XD has 12 Panasonic 17500 cells of 0.75 Ah capacity in a 4P3S configuration. The new Panasonic battery has 6 Panasonic 18650 cells of 2.0 Ah capacity in a 2P3S configuration. The batteries and cells for both models have been evaluated for performance and safety. A comparison of the cells under similar test conditions will be presented. The performance of the cells has been evaluated under different rates of charge and discharge and different temperatures. The cells have been tested under abuse conditions and the safety features in the cells evaluated. The protective circuit board in the battery was also tested under conditions of overcharge, overdischarge, short circuit, and unbalanced cell configurations. The results of the studies will be presented in this paper.
Automated system for analyzing the activity of individual neurons
NASA Technical Reports Server (NTRS)
Bankman, Isaac N.; Johnson, Kenneth O.; Menkes, Alex M.; Diamond, Steve D.; Oshaughnessy, David M.
1993-01-01
This paper presents a signal processing system that: (1) provides an efficient and reliable instrument for investigating the activity of neuronal assemblies in the brain; and (2) demonstrates the feasibility of generating the command signals of prostheses using the activity of relevant neurons in disabled subjects. The system operates online, in a fully automated manner and can recognize the transient waveforms of several neurons in extracellular neurophysiological recordings. Optimal algorithms for detection, classification, and resolution of overlapping waveforms are developed and evaluated. Full automation is made possible by an algorithm that can set appropriate decision thresholds and an algorithm that can generate templates on-line. The system is implemented with a fast IBM PC compatible processor board that allows on-line operation.
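The detect-then-classify pipeline described can be sketched as follows. This is a toy illustration, not the authors' system: the detection threshold is set automatically from a robust noise estimate (median absolute deviation), and each detected snippet is assigned to its nearest stored template by least squares; the constant 1.4826 scales the MAD to approximately one standard deviation for Gaussian noise.

```python
def detect_spikes(signal, k=4.0):
    """Return indices of samples exceeding k times a robust noise estimate."""
    med = sorted(signal)[len(signal) // 2]
    mad = sorted(abs(s - med) for s in signal)[len(signal) // 2]
    threshold = med + k * 1.4826 * mad
    return [i for i, s in enumerate(signal) if s > threshold]

def classify(snippet, templates):
    """Assign a waveform snippet to the nearest template (least squares)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(snippet, templates[name]))
```

A large transient riding on low-amplitude noise is flagged, and a noisy snippet is matched to the closest of the stored unit templates.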
Issues in ATM Support of High-Performance, Geographically Distributed Computing
NASA Technical Reports Server (NTRS)
Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G
1995-01-01
This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.
PIV/HPIV Film Analysis Software Package
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems including (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second, and is completely automated with the exception of user input to a configuration file prior to analysis execution for update of various system parameters.
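The core computation, the 2D spatial autocorrelation of a film subregion, can be sketched directly. This is an unoptimized illustration, not the package's code (a system at the stated throughput would use FFT-based correlation): for a double-exposed PIV recording, the strongest off-origin correlation peak sits at the particle displacement.

```python
def autocorrelate_2d(img, max_shift):
    """Direct 2D spatial autocorrelation of a small subregion.
    Returns {(dy, dx): correlation} for shifts up to max_shift."""
    h, w = len(img), len(img[0])
    corr = {}
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        s += img[y][x] * img[y2][x2]
            corr[(dy, dx)] = s
    return corr
```

For an image containing particle pairs displaced by two pixels horizontally, the correlation shows its self-peak at the origin and secondary peaks at the displacement.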
Performance of the Cell processor for biomolecular simulations
NASA Astrophysics Data System (ADS)
De Fabritiis, G.
2007-06-01
The new Cell processor represents a turning point for computing intensive applications. Here, I show that for molecular dynamics it is possible to reach an impressive sustained performance in excess of 30 Gflops with a peak of 45 Gflops for the non-bonded force calculations, over one order of magnitude faster than a single core standard processor.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Rogers, J. L., Jr.
1986-01-01
A finite element based programming system for minimum weight design of a truss-type structure subjected to displacement, stress, and lower and upper bounds on design variables is presented. The programming system consists of a number of independent processors, each performing a specific task. These processors, however, are interfaced through a well-organized data base, making the tasks of modifying, updating, or expanding the programming system much easier in the friendly environment provided by many inexpensive personal computers. The proposed software can be viewed as an important step toward achieving a 'dummy' finite element for optimization. The programming system has been implemented on both large and small computers (such as VAX, CYBER, IBM-PC, and APPLE), although the focus is on the latter. Examples are presented to demonstrate the capabilities of the code. The present programming system can be used stand-alone or as part of a multilevel decomposition procedure to obtain optimum designs for very large scale structural systems. Furthermore, other related research areas, such as developing optimization algorithms (or, at a larger level, a structural synthesis program) for future trends in using parallel computers, may also benefit from this study.
P-HARP: A parallel dynamic spectral partitioner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sohn, A.; Biswas, R.; Simon, H.D.
1997-05-01
Partitioning unstructured graphs is central to the parallel solution of problems in computational science and engineering. The authors earlier introduced the sequential version of an inertial spectral partitioner called HARP, which maintains the quality of recursive spectral bisection (RSB) while forming the partitions an order of magnitude faster than RSB. The serial HARP is known to be the fastest spectral partitioner to date, three to four times faster than similar partitioners on a variety of meshes. This paper presents a parallel version of HARP, called P-HARP. Two types of parallelism have been exploited: loop-level parallelism and recursive parallelism. P-HARP has been implemented in MPI on the SGI/Cray T3E and the IBM SP2. Experimental results demonstrate that P-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.25 seconds on a 64-processor T3E. Experimental results further show that P-HARP can give nearly a 20-fold speedup on 64 processors. These results indicate that graph partitioning is no longer a major bottleneck hindering the advancement of computational science and engineering for dynamically-changing real-world applications.
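HARP itself is an inertial spectral method; as background, a generic spectral-bisection step (not the HARP algorithm) can be sketched as follows: the graph is split by the sign of the Fiedler vector, the eigenvector of the graph Laplacian with the second-smallest eigenvalue, computed here by plain power iteration on a shifted Laplacian with the trivial constant eigenvector projected out.

```python
def fiedler_bisect(adj, iters=200):
    """Split a graph (dense adjacency matrix of 0/1) into two parts by
    the sign of an approximate Fiedler vector."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    # shift*I - L is positive definite and reverses the eigenvalue order,
    # so the Fiedler vector becomes dominant once constants are removed
    shift = 2 * max(deg) + 1
    v = [i - (n - 1) / 2.0 for i in range(n)]   # deterministic start
    for _ in range(iters):
        # w = (shift*I - L) v, with (L v)_i = deg_i v_i - sum_j adj_ij v_j
        w = [shift * v[i]
             - (deg[i] * v[i] - sum(adj[i][j] * v[j] for j in range(n)))
             for i in range(n)]
        mean = sum(w) / n            # project out the constant eigenvector
        w = [x - mean for x in w]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    return ([i for i in range(n) if v[i] < 0],
            [i for i in range(n) if v[i] >= 0])
```

On a 4-node path graph the method recovers the natural split into the two end pairs.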
Liwo, Adam; Ołdziej, Stanisław; Czaplewski, Cezary; Kleinerman, Dana S.; Blood, Philip; Scheraga, Harold A.
2010-01-01
We report the implementation of our united-residue UNRES force field for simulations of protein structure and dynamics on massively parallel architectures. In addition to the coarse-grained parallelism already implemented in our previous work, in which each conformation was treated by a different task, we introduce a fine-grained level in which energy and gradient evaluation are split between several tasks. The Message Passing Interface (MPI) libraries have been utilized to construct the parallel code. The parallel performance of the code has been tested on a professional Beowulf cluster (Xeon Quad Core), a Cray XT3 supercomputer, and two IBM BlueGene/P supercomputers with canonical and replica-exchange molecular dynamics. With IBM BlueGene/P, about 50% efficiency and a 120-fold speed-up of the fine-grained part were achieved for a single trajectory of a 767-residue protein with the use of 256 processors/trajectory. Because of averaging over the fast degrees of freedom, UNRES provides an effective 1000-fold speed-up compared to the experimental time scale and, therefore, enables us to effectively carry out millisecond-scale simulations of proteins with 500 and more amino-acid residues in days of wall-clock time. PMID:20305729
NASA Technical Reports Server (NTRS)
Voecks, G. E.
1985-01-01
In the proposed fuel-cell system, methanol is converted to hydrogen in two places. An external fuel processor converts only part of the methanol; the remaining methanol is converted in the fuel cell itself, in a reaction at the anode. As a result, the size of the fuel processor is reduced, system efficiency is increased, and cost is lowered.
Fuel processors for fuel cell APU applications
NASA Astrophysics Data System (ADS)
Aicher, T.; Lenz, B.; Gschnell, F.; Groos, U.; Federici, F.; Caprile, L.; Parodi, L.
The conversion of liquid hydrocarbons to a hydrogen-rich product gas is a central process step in fuel processors for auxiliary power units (APUs) for vehicles of all kinds. The selection of the reforming process depends on the fuel and the type of fuel cell. For vehicle power trains, liquid hydrocarbons like gasoline, kerosene, and diesel are utilized, and therefore they will also be the fuel for the respective APU systems. The fuel cells commonly envisioned for mobile APU applications are molten carbonate fuel cells (MCFC), solid oxide fuel cells (SOFC), and proton exchange membrane fuel cells (PEMFC). Since high-temperature fuel cells, e.g. MCFCs or SOFCs, can be supplied with a feed gas that contains carbon monoxide (CO), their fuel processors do not require reactors for CO reduction and removal. For PEMFCs, on the other hand, CO concentrations in the feed gas must not exceed 50 ppm, better 20 ppm, which requires additional reactors downstream of the reforming reactor. This paper gives an overview of the current state of fuel processor development for APU applications and of APU system developments. Furthermore, it presents the latest developments at Fraunhofer ISE regarding fuel processors for high-temperature fuel cell APU systems on board ships and aircraft.
Method for operating a combustor in a fuel cell system
Chalfant, Robert W.; Clingerman, Bruce J.
2002-01-01
A method of operating a combustor to heat a fuel processor in a fuel cell system, in which the fuel processor generates a hydrogen-rich stream a portion of which is consumed in a fuel cell stack and a portion of which is discharged from the fuel cell stack and supplied to the combustor, and wherein first and second streams are supplied to the combustor, the first stream being a hydrocarbon fuel stream and the second stream consisting of said hydrogen-rich stream, the method comprising the steps of monitoring the temperature of the fuel processor; regulating the quantity of the first stream to the combustor according to the temperature of the fuel processor; and comparing said quantity of said first stream to a predetermined value or range of predetermined values.
Augustin, Jean-Christophe; Ferrier, Rachel; Hezard, Bernard; Lintz, Adrienne; Stahl, Valérie
2015-02-01
An individual-based modeling (IBM) approach combined with microenvironment modeling of vacuum-packed cold-smoked salmon was more effective at describing the variability of the growth of a few Listeria monocytogenes cells contaminating irradiated salmon slices than traditional population models. The IBM approach was particularly relevant for predicting the absence of growth in 25% (5 among 20) of artificially contaminated cold-smoked salmon samples stored at 8 °C. These results confirmed similar observations obtained with smear soft cheese (Ferrier et al., 2013). These two different food models were used to compare the IBM/microscale and population/macroscale modeling approaches in more global exposure and risk assessment frameworks taking into account the variability and/or the uncertainty of the factors influencing the growth of L. monocytogenes. We observed that the traditional population models significantly overestimate exposure and risk estimates in comparison to the IBM approach when contamination of foods occurs with a low number of cells (<100 per serving). Moreover, the exposure estimates obtained with the population model were characterized by great uncertainty. The overestimation was mainly linked to the ability of IBM to predict no-growth situations rather than to the consideration of the microscale environment. On the other hand, when the aim of quantitative risk assessment studies is only to assess the relative impact of changes in control measures affecting the growth of foodborne bacteria, the two modeling approaches gave similar results and the simpler population approach was suitable. Copyright © 2014 Elsevier Ltd. All rights reserved.
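The ability of the IBM approach to predict no-growth situations at low cell numbers follows from simple probability: if each contaminating cell independently fails to initiate growth with probability p, a serving with N cells shows no growth with probability p^N, which is non-negligible only for small N. A sketch with made-up parameter values, not the paper's fitted ones:

```python
import random

def p_no_growth(p_fail, n_cells):
    """Probability that none of n independent contaminating cells initiates
    growth, when each single cell fails to grow with probability p_fail.
    Deterministic population models implicitly set this to zero."""
    return p_fail ** n_cells

def ibm_no_growth_fraction(p_fail, n_cells, n_samples, seed=1):
    """Individual-based Monte Carlo estimate of the same quantity:
    simulate every contaminating cell of every sample separately."""
    rng = random.Random(seed)
    hits = sum(all(rng.random() < p_fail for _ in range(n_cells))
               for _ in range(n_samples))
    return hits / n_samples
```

With an assumed p_fail of 0.5 and 3 cells per serving, one serving in eight shows no growth, while at 100 cells per serving the probability is negligible, matching the observation that the two approaches diverge only at low contamination levels.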
Particle Based Simulations of Complex Systems with MP2C : Hydrodynamics and Electrostatics
NASA Astrophysics Data System (ADS)
Sutmann, Godehard; Westphal, Lidia; Bolten, Matthias
2010-09-01
Particle-based simulation methods are well-established paths to explore system behavior on microscopic to mesoscopic time and length scales. With the development of new computer architectures, it becomes more and more important to concentrate on local algorithms which do not need global data transfer or reorganization of large arrays of data across processors. This requirement is especially challenging for long-range interactions in particle systems, i.e. mainly hydrodynamic and electrostatic contributions. In this article, emphasis is given to the implementation and parallelization of the Multi-Particle Collision Dynamics method for hydrodynamic contributions and a splitting scheme based on multigrid for electrostatic contributions. Implementations are done for massively parallel architectures and are demonstrated on the IBM Blue Gene/P architecture Jugene in Jülich.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DUNCAN, D.R.
The HANSF analysis tool is an integrated model considering phenomena inside a multi-canister overpack (MCO) spent nuclear fuel container, such as fuel oxidation, convective and radiative heat transfer, and the potential for fission product release. This manual reflects HANSF version 1.3.2, a revised version of 1.3.1. HANSF 1.3.2 was written to correct minor errors and to allow modeling of condensate flow on the MCO inner surface. HANSF 1.3.2 is intended for use on personal computers such as IBM-compatible machines with Intel processors running under Lahey TI or Digital Visual FORTRAN, Version 6.0, but this does not preclude operation in other environments.
Noise reduction and image enhancement using a hardware implementation of artificial neural networks
NASA Astrophysics Data System (ADS)
David, Robert; Williams, Erin; de Tremiolles, Ghislain; Tannhof, Pascal
1999-03-01
In this paper, we present a neural based solution developed for noise reduction and image enhancement using the ZISC, an IBM hardware processor which implements the Restricted Coulomb Energy algorithm and the K-Nearest Neighbor algorithm. Artificial neural networks present the advantages of processing time reduction in comparison with classical models, adaptability, and the weighted property of pattern learning. The goal of the developed application is image enhancement in order to restore old movies (noise reduction, focus correction, etc.), to improve digital television images, or to treat images which require adaptive processing (medical images, spatial images, special effects, etc.). Image results show a quantitative improvement over the noisy image as well as the efficiency of this system. Further enhancements are being examined to improve the output of the system.
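The ZISC's K-Nearest-Neighbor mode can be emulated in a few lines of software. This is an illustrative stand-in, not the hardware implementation: an input vector (e.g. a noisy pixel neighborhood) is compared against stored prototype patterns and takes the value voted by its k closest matches. Manhattan distance is used here, matching the norm commonly cited for ZISC, though that choice is an assumption of this sketch.

```python
def knn_restore(vector, prototypes, k=3):
    """Classify an input vector by the majority label of its k nearest
    stored prototypes. `prototypes` is a list of (pattern, label) pairs."""
    ranked = sorted(prototypes,
                    key=lambda pv: sum(abs(a - b)          # Manhattan (L1)
                                       for a, b in zip(vector, pv[0])))
    votes = {}
    for _, label in ranked[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

A pixel neighborhood corrupted toward, but still closest to, the "bright" prototypes is restored to the bright class, and vice versa for a mildly noisy dark one.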
Method for operating a combustor in a fuel cell system
Clingerman, Bruce J.; Mowery, Kenneth D.
2002-01-01
In one aspect, the invention provides a method of operating a combustor to heat a fuel processor to a desired temperature in a fuel cell system, wherein the fuel processor generates hydrogen (H.sub.2) from a hydrocarbon for reaction within a fuel cell to generate electricity. More particularly, the invention provides a method and select system design features which cooperate to provide a start up mode of operation and a smooth transition from start-up of the combustor and fuel processor to a running mode.
A novel VLSI processor architecture for supercomputing arrays
NASA Technical Reports Server (NTRS)
Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.
1993-01-01
Design of the processor element for general purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify the mapping of complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP,IP,OP,CM,R,SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high performance mask programmable PAcube arrays.
Digital Parallel Processor Array for Optimum Path Planning
NASA Technical Reports Server (NTRS)
Kremeny, Sabrina E. (Inventor); Fossum, Eric R. (Inventor); Nixon, Robert H. (Inventor)
1996-01-01
The invention computes the optimum path across a terrain or topology represented by an array of parallel processor cells interconnected between neighboring cells by links extending along different directions to the neighboring cells. Such an array is preferably implemented as a high-speed integrated circuit. The computation of the optimum path is accomplished by, in each cell, receiving stimulus signals from neighboring cells along corresponding directions, determining and storing the identity of a direction along which the first stimulus signal is received, broadcasting a subsequent stimulus signal to the neighboring cells after a predetermined delay time, whereby stimulus signals propagate throughout the array from a starting one of the cells. After propagation of the stimulus signal throughout the array, a master processor traces back from a selected destination cell to the starting cell along an optimum path of the cells in accordance with the identity of the directions stored in each of the cells.
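The propagate-then-trace-back scheme of the invention can be emulated sequentially (in the actual cell array every cell updates in parallel): a breadth-first wavefront records in each cell the direction from which the first stimulus arrived, and a master process then follows those stored directions backward from the chosen destination. The grid encoding below is made up for illustration.

```python
from collections import deque

DIRS = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1)}
OPP = {'N': 'S', 'S': 'N', 'E': 'W', 'W': 'E'}

def plan(grid, start, goal):
    """grid[r][c] == 1 marks blocked terrain. Returns the optimum
    (fewest-step) path from start to goal as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}       # direction of first stimulus per cell
    frontier = deque([start])
    while frontier:                 # wavefront propagation phase
        r, c = frontier.popleft()
        for d, (dr, dc) in DIRS.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = OPP[d]   # points back toward the source
                frontier.append((nr, nc))
    if goal not in came_from:
        return None
    path, cell = [goal], goal       # trace-back phase
    while came_from[cell] is not None:
        dr, dc = DIRS[came_from[cell]]
        cell = (cell[0] + dr, cell[1] + dc)
        path.append(cell)
    return path[::-1]
```

On a small grid with a wall across the middle, the recovered path is the unique shortest detour around the obstacle.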
Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, P.N.; Chang, B.; Hanebutte, U.R.
1999-12-29
Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems with simple geometry of a pure absorber with a large void region was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which is itself embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct: Problem 2 has a straight duct and Problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes with thread-based parallelism among the processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.
Miniature Fuel Processors for Portable Fuel Cell Power Supplies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, Jamie D.; Jones, Evan O.; Palo, Daniel R.
2003-06-02
Miniature and micro-scale fuel processors are discussed. The enabling technologies for these devices are novel catalysts and micro-technology-based designs. The novel catalyst allows methanol reforming at high gas hourly space velocities of 50,000 hr-1 or higher while maintaining carbon monoxide levels at 1% or less. The micro-technology-based designs enable the devices to be extremely compact and lightweight. The miniature fuel processors can nominally provide between 25 and 50 watts equivalent of hydrogen, which is ample for soldier or personal portable power supplies. The integrated processors have a volume of less than 50 cm3, a mass of less than 150 grams, and thermal efficiencies of up to 83%. With reasonable assumptions on fuel cell efficiencies, anode gas and water management, parasitic power loss, etc., the energy density was estimated at 1700 Whr/kg. The miniature processors have been demonstrated with a carbon monoxide clean-up method and a fuel cell stack. The micro-scale fuel processors have been designed to provide up to 0.3 watt equivalent of power with efficiencies over 20%. They have a volume of less than 0.25 cm3 and a mass of less than 1 gram.
NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge
NASA Technical Reports Server (NTRS)
Saini, Subhash
1995-01-01
Current Connection Machine (CM) Fortran codes developed for the CM-2 and the CM-5 represent an important class of parallel applications. Several users have employed CM Fortran codes in production mode on the CM-2 and the CM-5 for the last five to six years, constituting a heavy investment in terms of cost and time. With Thinking Machines Corporation's decision to withdraw from the hardware business and with the decommissioning of many CM-2 and CM-5 machines, the best way to protect the substantial investment in CM Fortran codes is to port the codes to High Performance Fortran (HPF) on highly parallel systems. HPF is very similar to CM Fortran and thus represents a natural transition. Conversion issues involved in porting CM Fortran codes on the CM-5 to HPF are presented. In particular, the differences between the data distribution directives, as well as between the CM Fortran Utility Routines Library and the equivalent functionality in the HPF Library, are discussed. Several CM Fortran codes (the Cannon algorithm for matrix-matrix multiplication, a linear solver for Ax=b, a 1-D convolution for 2-D datasets, a Laplace's equation solver, and a Direct Simulation Monte Carlo (DSMC) code) have been ported to Subset HPF on the IBM SP2 and the SGI Power Challenge. Speedup ratios versus number of processors for the linear solver and the DSMC code are presented.
Proton exchange membrane fuel cell technology for transportation applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swathirajan, S.
1996-04-01
Proton Exchange Membrane (PEM) fuel cells are extremely promising as future power plants in the transportation sector to achieve an increase in energy efficiency and eliminate environmental pollution due to vehicles. GM is currently involved in a multiphase program with the US Department of Energy for developing a proof-of-concept hybrid vehicle based on a PEM fuel cell power plant and a methanol fuel processor. Other participants in the program are Los Alamos National Labs, Dow Chemical Co., Ballard Power Systems, and DuPont Co. In the just-completed Phase 1 of the program, a 10 kW PEM fuel cell power plant was built and tested to demonstrate the feasibility of integrating a methanol fuel processor with a PEM fuel cell stack. However, the fuel cell power plant must overcome stiff technical and economic challenges before it can be commercialized for light-duty vehicle applications. Progress achieved in Phase 1 on the use of monolithic catalyst reactors in the fuel processor, managing CO impurity in the fuel cell stack, low-cost electrode-membrane assemblies, and the integration of the fuel processor with a Ballard PEM fuel cell stack will be presented.
Ferrier, Rachel; Hezard, Bernard; Lintz, Adrienne; Stahl, Valérie
2013-01-01
An individual-based modeling (IBM) approach was developed to describe the behavior of a few Listeria monocytogenes cells contaminating the surface of smear soft cheese. The IBM approach consisted of assessing the stochastic individual behaviors of cells on cheese surfaces while knowing the characteristics of their surrounding microenvironments. We used a microelectrode for pH measurements and micro-osmolality measurements to assess the water activity of cheese microsamples. These measurements revealed a high variability of microscale pH compared to that of macroscale pH. A model describing the increase in pH from approximately 5.0 to more than 7.0 during ripening was developed. The spatial variability of the cheese surface, characterized by a pH that increases with radius and is higher on crests than in hollows of the cheese rind, was also modeled. The microscale water activity ranged from approximately 0.96 to 0.98 and was stable during ripening. The spatial variability on cheese surfaces was low compared to between-cheese variability. Models describing the microscale variability of cheese characteristics were combined with the IBM approach to simulate the stochastic growth of L. monocytogenes on cheese, and these simulations were compared to bacterial counts obtained from irradiated cheeses artificially contaminated at different ripening stages. The variability of L. monocytogenes counts simulated with the IBM/microenvironmental approach was consistent with the observed one. Contrasting situations corresponding to no growth or highly contaminated foods could be deduced from these models. Moreover, the IBM approach was more effective than the traditional population/macroenvironmental approach at describing the actual variability of bacterial behavior. PMID:23872572
PC-CUBE: A Personal Computer Based Hypercube
NASA Technical Reports Server (NTRS)
Ho, Alex; Fox, Geoffrey; Walker, David; Snyder, Scott; Chang, Douglas; Chen, Stanley; Breaden, Matt; Cole, Terry
1988-01-01
PC-CUBE is an ensemble of IBM PCs or close compatibles connected in the hypercube topology with ordinary computer cables. Communication occurs at a rate of 115.2 kbaud via the RS-232 serial links. Available for PC-CUBE are the Crystalline Operating System III (CrOS III), the Mercury Operating System, and CUBIX and PLOTIX, which are parallel I/O and graphics libraries. A CrOS performance monitor was developed to facilitate the measurement of a program's communication and computation time and their effects on performance. Also available are CXLISP, a parallel version of the XLISP interpreter; GRAFIX, some graphics routines for the EGA and CGA; and a general execution profiler for determining the execution time spent in program subroutines. PC-CUBE provides a programming environment similar to that of all hypercube systems running CrOS III, Mercury and CUBIX. In addition, every node (personal computer) has its own graphics display monitor and storage devices. These allow data to be displayed or stored at every processor, which has much instructional value and enables easier debugging of applications. Some application programs taken from the book Solving Problems on Concurrent Processors (Fox, 1988) were implemented with graphics enhancement on PC-CUBE. The applications range from solving the Mandelbrot set, the Laplace equation, the wave equation, and long-range force interactions, to WaTor, an ecological simulation.
A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database
Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; ...
2013-01-01
Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes, which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm for creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) it can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies; (2) it exploits neighborhood communication patterns during the ghost creation process, thus eliminating all-to-all communication; (3) for applications that need neighbors of neighbors, the algorithm can create n ghost layers, up to the point where the whole partitioned mesh is ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
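The single-layer case of ghost creation can be illustrated with a small sketch (illustrative only, not the article's algorithm: real ghosting operates on a distributed mesh via neighborhood message passing, whereas this toy version sees all parts in one dictionary):

```python
def add_ghost_layer(parts, neighbors):
    """For each partition, collect the off-part neighbors of its owned
    cells; these are the cells that would be copied in as ghosts so that
    neighborhood lookups become local reads instead of remote messages."""
    ghosts = {}
    for pid, owned in parts.items():
        layer = set()
        for cell in owned:
            # any neighbor not owned by this part must be ghosted
            layer.update(n for n in neighbors[cell] if n not in owned)
        ghosts[pid] = layer
    return ghosts
```

Repeating the step on the union of owned and ghost cells yields the n-layer ghosting the abstract describes for neighbors-of-neighbors access.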
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-01-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and the application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solution of large matrices, have proved to be major beneficiaries of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, the theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f,P) = 1/((1-f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
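Amdahl's Law invoked in the abstract, S(f,P) = 1/((1-f) + f/P), is easy to check numerically (a minimal sketch; the function name is ours):

```python
def amdahl_speedup(f, p):
    """Amdahl's Law: f is the fraction of task time that multiprocesses,
    p the processor count; the serial fraction (1 - f) bounds the speedup."""
    return 1.0 / ((1.0 - f) + f / p)

# A highly parallel Monte Carlo run, f = 0.95, approaches at most
# 1 / (1 - f) = 20x speedup no matter how many processors are added.
```

For f = 0.95 and P = 16 the formula gives about 9.1, well below the processor count, illustrating why the theoretical limit is rarely reached even before the "additional terms" the abstract mentions.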
Simulation of a Real-Time Local Data Integration System over East-Central Florida
NASA Technical Reports Server (NTRS)
Case, Jonathan
1999-01-01
The Applied Meteorology Unit (AMU) simulated a real-time configuration of a Local Data Integration System (LDIS) using data from 15-28 February 1999. The objectives were to assess the utility of a simulated real-time LDIS, evaluate and extrapolate system performance to identify the hardware necessary to run a real-time LDIS, and determine the sensitivities of LDIS. The ultimate goal for running LDIS is to generate analysis products that enhance short-range (less than 6 h) weather forecasts issued in support of the 45th Weather Squadron, Spaceflight Meteorology Group, and Melbourne National Weather Service operational requirements. The simulation used the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) software on an IBM RS/6000 workstation with a 67-MHz processor. This configuration ran in real-time, but not sufficiently fast for operational requirements. Thus, the AMU recommends a workstation with a 200-MHz processor and 512 megabytes of memory to run the AMU's configuration of LDIS in real-time. This report presents results from two case studies and several data sensitivity experiments. ADAS demonstrates utility through its ability to depict high-resolution cloud and wind features in a variety of weather situations. The sensitivity experiments illustrate the influence of disparate data on the resulting ADAS analyses.
FTMP (Fault Tolerant Multiprocessor) programmer's manual
NASA Technical Reports Server (NTRS)
Feather, F. E.; Liceaga, C. A.; Padilla, P. A.
1986-01-01
The Fault Tolerant Multiprocessor (FTMP) computer system was constructed using the Rockwell/Collins CAPS-6 processor. It is installed in the Avionics Integration Research Laboratory (AIRLAB) of NASA Langley Research Center. It is hosted by AIRLAB's System 10, a VAX 11/750, for the loading of programs and experimentation. The FTMP support software includes a cross compiler for a high-level language called the Automated Engineering Design (AED) System, an assembler for the CAPS-6 processor assembly language, and a linker. Access to this support software is through an automated remote access facility on the VAX which relieves the user of the burden of learning how to use the IBM 4381. This manual is a compilation of information about the FTMP support environment. It explains the FTMP software and support environment along with many of the finer points of running programs on FTMP. This will be helpful to the researcher trying to run an experiment on FTMP and even to the person probing FTMP with fault injections. Much of the information in this manual can be found in other sources; we are only attempting to bring together the basic points in a single source. Should the reader need points clarified, there is a list of support documentation in the back of this manual.
Potential medical applications of TAE
NASA Technical Reports Server (NTRS)
Fahy, J. Ben; Kaucic, Robert; Kim, Yongmin
1986-01-01
In cooperation with scientists in the University of Washington Medical School, a microcomputer-based image processing system for quantitative microscopy, called DMD1 (Digital Microdensitometer 1), was constructed. In order to make DMD1 transportable to different hosts and image processors, we have been investigating the possibility of rewriting the lower-level portions of the DMD1 software using Transportable Applications Executive (TAE) libraries and subsystems. If successful, we hope to produce a newer version of DMD1, called DMD2, running on an IBM PC/AT under the SCO XENIX System V operating system and using any of seven target image processors available in our laboratory. Following this implementation, copies of the system will be transferred to other laboratories with biomedical imaging applications. By integrating those applications into DMD2, we hope to eventually expand our system into a low-cost, general-purpose biomedical imaging workstation. This workstation will be useful not only as a self-contained instrument for clinical or research applications, but also as part of a large-scale Digital Imaging Network and Picture Archiving and Communication System (DIN/PACS). Widespread application of these TAE-based image processing and analysis systems should facilitate software exchange and scientific cooperation not only within the medical community, but between the medical and remote sensing communities as well.
S-HARP: A parallel dynamic spectral partitioner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sohn, A.; Simon, H.
1998-01-01
Computational science problems with adaptive meshes involve dynamic load balancing when implemented on parallel machines. This dynamic load balancing requires fast partitioning of computational meshes at run time. The authors present in this report a fast parallel dynamic partitioner, called S-HARP. The underlying principles of S-HARP are the speed of inertial partitioning and the quality of spectral partitioning. S-HARP partitions a graph from scratch, requiring no partition information from previous iterations. Two types of parallelism have been exploited in S-HARP: fine-grain loop-level parallelism and coarse-grain recursive parallelism. The parallel partitioner has been implemented in Message Passing Interface on the Cray T3E and IBM SP2 for portability. Experimental results indicate that S-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.2 seconds on a 64-processor Cray T3E. S-HARP is much more scalable than other dynamic partitioners, giving over 15-fold speedup on 64 processors while ParaMeTiS 1.0 gives only a few-fold speedup. Experimental results demonstrate that S-HARP is three to ten times faster than the dynamic partitioners ParaMeTiS and Jostle on six computational meshes of size over 100,000 vertices.
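The flavor of recursive geometric bisection underlying partitioners like S-HARP can be sketched as follows (a toy serial version using longest-axis spread as a cheap stand-in for the principal inertial axis; S-HARP itself also incorporates spectral information and runs in parallel):

```python
def inertial_bisect(points, nparts):
    """Recursively split a point set at the median along its widest axis
    until nparts (a power of two) equally sized partitions remain.
    Illustrative sketch only, not the S-HARP code."""
    if nparts == 1:
        return [points]
    dims = len(points[0])
    # widest-spread axis approximates the principal inertial direction
    spans = [max(p[d] for p in points) - min(p[d] for p in points)
             for d in range(dims)]
    axis = spans.index(max(spans))
    ordered = sorted(points, key=lambda p: p[axis])
    mid = len(ordered) // 2          # median split balances the load
    return (inertial_bisect(ordered[:mid], nparts // 2) +
            inertial_bisect(ordered[mid:], nparts // 2))
```

Because each level splits at the median, the partitions are load-balanced by construction; the quality (edge cut) is what the spectral component of S-HARP improves.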
Szatmary, J; Hadani, I; Julesz, B
1997-01-01
Rogers and Graham (1979) developed a system to show that head-movement-contingent motion parallax produces monocular depth perception in random dot patterns. Their display system comprised an oscilloscope driven by function generators or a special graphics board that triggered the X and Y deflection of the raster scan signal. Replicating this system required costly hardware that is no longer on the market. In this paper the Rogers-Graham method is reproduced on an Intel-processor-based, IBM PC-compatible machine with no additional hardware cost. An adapted joystick sampled through the standard game port can serve as a provisional head-movement sensor. Monitor resolution for displaying motion is effectively enhanced 16 times by the use of anti-aliasing, enabling the display of thousands of random dots in real time at a refresh rate of 60 Hz or above. A color monitor enables the use of the anaglyph method, thus combining stereoscopic and monocular parallax on a single display without loss of speed. The power of this system is demonstrated by a psychophysical measurement in which subjects nulled head-movement-contingent illusory parallax, evoked by a static stereogram, with real parallax. The amount of real parallax required to null the illusory stereoscopic parallax increased monotonically with disparity.
NASA Astrophysics Data System (ADS)
van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten
2009-12-01
We present HONEI, an open-source collection of libraries offering a hardware-oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a finite element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straightforward C++ code using HONEI's SSE backend, and an additional 3-4 and 4-16 times faster execution on the Cell processor and a GPU, respectively. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for development and evaluation of such kernels, significantly simplifying their development. Program summary: Program title: HONEI. Catalogue identifier: AEDW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPLv2. No. of lines in distributed program, including test data, etc.: 216 180. No. of bytes in distributed program, including test data, etc.: 1 270 140. Distribution format: tar.gz. Programming language: C++. Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3. Operating system: Linux. RAM: at least 500 MB free. Classification: 4.8, 4.3, 6.1. External routines: SSE: none; [1] for GPU, [2] for Cell backend. Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. 
The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the underlying hardware towards heterogeneity and parallelism. This is particularly relevant for data-intensive problems stemming from discretisations with local support, such as finite differences, volumes and elements. Solution method: To address these issues, we present a hardware-aware collection of libraries combining the advantages of modern software techniques and hardware-oriented programming. Applications built on top of these libraries can be configured trivially to execute on CPUs, GPUs or the Cell processor. In order to evaluate the performance and accuracy of our approach, we provide two domain-specific applications: a multigrid solver for the Poisson problem and a fully explicit solver for the 2D shallow water equations. Restrictions: HONEI is actively being developed, and its feature list is continuously expanded. Not all combinations of operations and architectures might be supported in earlier versions of the code. Obtaining snapshots from http://www.honei.org is recommended. Unusual features: The considered applications as well as all library operations can be run on NVIDIA GPUs and the Cell BE. Running time: Depends on the application and the input sizes. The Poisson solver executes in a few seconds, while the SWE solver requires up to 5 minutes for large spatial discretisations or small timesteps. References: http://www.nvidia.com/cuda. http://www.ibm.com/developerworks/power/cell.
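The backend-abstraction idea, one logical operation with several hardware-specific implementations selected at run time, can be sketched as follows (hypothetical Python names purely for illustration; HONEI itself is C++ with SSE, CUDA and Cell backends):

```python
# Toy dispatch registry in the spirit of HONEI's backend abstraction.
KERNELS = {}

def backend(arch):
    """Decorator registering a kernel implementation for one architecture."""
    def register(fn):
        KERNELS[(fn.__name__, arch)] = fn
        return fn
    return register

@backend("cpu")
def axpy(a, x, y):
    """y <- a*x + y, the kind of vector kernel each backend reimplements."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def dispatch(name, arch, *args):
    """Run the kernel tuned for `arch`, falling back to the CPU version
    when no specialised implementation has been registered."""
    fn = KERNELS.get((name, arch)) or KERNELS[(name, "cpu")]
    return fn(*args)
```

An application calls `dispatch("axpy", arch, ...)` and gains new hardware support whenever someone registers, say, a `"gpu"` variant, without the application code changing, which mirrors how HONEI applications are reconfigured across CPUs, GPUs and the Cell.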
Clinicopathologic features of myositis patients with CD8-MHC-1 complex pathology.
Ikenaga, Chiseko; Kubota, Akatsuki; Kadoya, Masato; Taira, Kenichiro; Uchio, Naohiro; Hida, Ayumi; Maeda, Meiko Hashimoto; Nagashima, Yu; Ishiura, Hiroyuki; Kaida, Kenichi; Goto, Jun; Tsuji, Shoji; Shimizu, Jun
2017-09-05
To determine the clinical features of myositis patients with the histopathologic finding of CD8-positive T cells invading non-necrotic muscle fibers expressing major histocompatibility complex class 1 (the CD8-MHC-1 complex), which is shared by polymyositis (PM) and inclusion body myositis (IBM), in relation to the p62 immunostaining pattern of muscle fibers. All 93 myositis patients with the CD8-MHC-1 complex who were referred to our hospital from 1993 to 2015 were classified on the basis of the European Neuromuscular Center (ENMC) diagnostic criteria for IBM (Rose, 2013) or PM (Hoogendijk, 2004) and analyzed. The 93 patients comprised 17 with PM, 70 with IBM, and 6 who met the criteria for neither PM nor IBM in terms of muscle weakness distribution (unclassifiable group). For the PM, IBM, and unclassifiable patients, respectively, mean ages at diagnosis were 63, 70, and 64 years; autoimmune disease was present in 7 (41%), 13 (19%), and 4 (67%); hepatitis C virus infection was detected in 0%, 13 (20%), and 2 (33%); and p62 was immunopositive in 0%, 66 (94%), and 2 (33%). Of the treated patients, 11 of 16 PM patients and 4 of 6 p62-immunonegative patients in the unclassifiable group responded to immunotherapy, whereas all 44 patients with IBM and the 2 p62-immunopositive patients in the unclassifiable group were unresponsive to immunotherapy. The CD8-MHC-1 complex is present in patients with PM, with IBM, or in the unclassifiable group. The data may serve as an argument for a trial of immunosuppressive treatment in p62-immunonegative patients with unclassifiable myositis. © 2017 American Academy of Neurology.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute the large data set of a cardiac model across a distributed-memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of computational load over the elements of the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set in order to optimize load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or simply taking the magnitude of the resulting negative coordinates, we were able to create 14 data sets of the same anatomy with different orientations and positions in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100, to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance across the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512-processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. 
Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
Optimization of Particle-in-Cell Codes on RISC Processors
NASA Technical Reports Server (NTRS)
Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.
1996-01-01
General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors commonly used in massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
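The data-reorganization strategy, keeping particles that deposit to the same grid cell contiguous in memory so cache lines holding grid quantities are reused, can be sketched as follows (a hypothetical 1-D Python illustration, not the authors' Fortran):

```python
def reorder_by_cell(positions, dx, ncells):
    """Counting-sort particles by grid cell index so that particles which
    gather/scatter to the same grid points sit contiguously in memory.
    Sketch of the cache-reuse idea only; real PIC codes reorder all
    particle attributes (position, velocity, charge) together."""
    cells = [min(int(x / dx), ncells - 1) for x in positions]
    counts = [0] * ncells
    for c in cells:
        counts[c] += 1
    # prefix sums give each cell's start offset in the reordered array
    offsets, total = [], 0
    for c in counts:
        offsets.append(total)
        total += c
    out = [0.0] * len(positions)
    for x, c in zip(positions, cells):
        out[offsets[c]] = x
        offsets[c] += 1
    return out
```

Sorting need not be exact or performed every step; in practice an occasional partial reordering is enough to keep most memory accesses within cache.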
Rain rate instrument for deployment at sea, phase 2
NASA Technical Reports Server (NTRS)
Steele, Jimmy W.
1992-01-01
This report describes, in detail, the SBIR Phase 2 contracting effort provided for by NASA Contract Number NAS8-38481, in which a prototype Rain Rate Sensor was developed. FWG Model RP101A is a fully functional rain rate and droplet size analyzing instrument. The RP101A consists of a fiber optic probe containing a 32-fiber array connected to an electronic signal processor. When interfaced to an IBM-compatible personal computer and configured with appropriate software, the RP101A is capable of measuring rain rates and particles ranging in size from around 300 microns up to 6 to 7 millimeters. FWG Associates, Inc. intends to develop a production model from the prototype and continue the effort under NASA's SBIR Phase 3 program.
NASA Technical Reports Server (NTRS)
Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)
1994-01-01
In a computer having a large number of single-instruction multiple data (SIMD) processors, each of the SIMD processors has two sets of three individual processor elements controlled by a master control unit and interconnected among a plurality of register file units where data is stored. The register files input and output data in synchronism with a minor cycle clock under control of two slave control units controlling the register file units connected to respective ones of the two sets of processor elements. Depending upon which ones of the register file units are enabled to store or transmit data during a particular minor clock cycle, the processor elements within an SIMD processor are connected in rings or in pipeline arrays, and may exchange data with the internal bus or with neighboring SIMD processors through interface units controlled by respective ones of the two slave control units.
Askanas, Valerie; Engel, W King
2011-04-01
The pathogenesis of sporadic inclusion-body myositis (s-IBM), the most common muscle disease of older persons, is complex and multifactorial. Both the muscle fiber degeneration and the mononuclear-cell inflammation are components of the s-IBM pathology, but how each relates to the pathogenesis remains unsettled. We consider that the intra-muscle-fiber degenerative component plays the primary and major pathogenic role leading to muscle fiber destruction and clinical weakness. In this article we review the newest research advances that provide a better understanding of s-IBM pathogenesis. Cellular abnormalities occurring in s-IBM muscle fibers are discussed, including: several proteins accumulated in the form of aggregates within muscle fibers, among them amyloid-β42 and its oligomers and phosphorylated tau in the form of paired helical filaments, whose putative detrimental influence we consider; cellular mechanisms leading to protein misfolding and aggregation, including evidence of their inadequate disposal; the pathogenic importance of endoplasmic reticulum stress and the unfolded protein response demonstrated in s-IBM muscle fibers; and decreased deacetylase activity of SIRT1. All these factors are combined with, and perhaps provoked by, an ageing intracellular milieu. Also discussed are the intriguing phenotypic similarities between s-IBM muscle fibers and the brains of Alzheimer disease and Parkinson's disease patients, the two most common neurodegenerative diseases associated with ageing. Muscle biopsy diagnostic criteria are also described and illustrated. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Methanol tailgas combustor control method
Hart-Predmore, David J.; Pettit, William H.
2002-01-01
A method for controlling the power, temperature, and fuel source of a combustor in a fuel cell apparatus to supply heat to a fuel processor, where the combustor has dual fuel inlet streams: a first fuel stream, and a second fuel stream of anode effluent from the fuel cell and reformate from the fuel processor. In all operating modes, an enthalpy balance is determined by regulating the amounts of the first and/or second fuel streams and the quantity of the first air flow stream to support fuel processor power requirements.
NASA Astrophysics Data System (ADS)
Yang, Mei; Jiao, Fengjun; Li, Shulian; Li, Hengqiang; Chen, Guangwen
2015-08-01
A self-sustained, complete and miniaturized methanol fuel processor has been developed based on modular integration and microreactor technology. The fuel processor comprises one methanol oxidative reformer, one methanol combustor and one two-stage CO preferential oxidation unit. Microchannel heat exchangers are employed to recover heat from the hot streams and miniaturize the system, thereby achieving high energy utilization efficiency. Through optimized thermal management and proper control of the operating parameters, the fuel processor can start up in 10 min at room temperature without external heating. A self-sustained state is achieved with an H2 production rate of 0.99 Nm3 h-1 and an extremely low CO content below 25 ppm. This amount of H2 is sufficient to supply a 1 kWe proton exchange membrane fuel cell. The corresponding thermal efficiency of the whole processor is higher than 86%. The size and weight of the assembled reactors integrated with microchannel heat exchangers are 1.4 L and 5.3 kg, respectively, demonstrating a very compact construction of the fuel processor.
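As a rough sanity check on the quoted figures (a minimal sketch; the hydrogen heating value of ~12.75 MJ per normal cubic metre is an assumed textbook value, not taken from the abstract), the chemical power carried by 0.99 Nm3 h-1 of H2 works out to roughly 3.5 kW, which comfortably covers a 1 kWe PEM stack at typical electrical efficiency and hydrogen utilization:

```python
# Assumed higher heating value of hydrogen per normal cubic metre (~12.75 MJ).
HHV_H2_J_PER_NM3 = 12.75e6

def h2_chemical_power_w(h2_nm3_per_h):
    """Chemical power (W, HHV basis) carried by a given H2 flow in Nm3/h."""
    return h2_nm3_per_h * HHV_H2_J_PER_NM3 / 3600.0

# 0.99 Nm3/h of H2 carries ~3.5 kW of chemical power (HHV basis).
power = h2_chemical_power_w(0.99)
```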
Insulin-like growth factor I in inclusion-body myositis and human muscle cultures.
Broccolini, Aldobrando; Ricci, Enzo; Pescatori, Mario; Papacci, Manuela; Gliubizzi, Carla; D'Amico, Adele; Servidei, Serenella; Tonali, Pietro; Mirabella, Massimiliano
2004-06-01
Possible pathogenic mechanisms of sporadic inclusion-body myositis (sIBM) include abnormal production and accumulation of amyloid beta (A beta), muscle aging, and increased oxidative stress. Insulin-like growth factor I (IGF-I), an endocrine and autocrine/paracrine trophic factor, provides resistance against A beta toxicity and oxidative stress in vitro and promotes cell survival. In this study we analyzed the IGF-I signaling pathway in sIBM muscle and found that 16.2% +/- 2.5% of nonregenerating fibers showed increased expression of IGF-I, phosphatidylinositide 3'OH-kinase, and Akt. In the majority of sIBM abnormal muscle fibers, increased IGF-I mRNA and protein correlated with the presence of A beta cytoplasmic inclusions. To investigate a possible relationship between A beta toxicity and IGF-I upregulation, normal primary muscle cultures were stimulated for 24 hours with the A beta(25-35) peptide corresponding to the biologically active domain of A beta. This induced an increase of IGF-I mRNA and protein in myotubes at 6 hours, followed by a gradual reduction thereafter. The level of phosphorylated Akt showed similar changes. We suggest that in sIBM, IGF-I overexpression represents a reactive response to A beta toxicity, possibly providing trophic support to vulnerable fibers. Understanding the signaling pathways activated by IGF-I in sIBM may lead to novel therapeutic strategies for the disease.
Geospace simulations on the Cell BE processor
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D.
2008-12-01
OpenGGCM (Open Geospace General Circulation Model) is an established numerical code that simulates the Earth's space environment. The most compute-intensive part is the MHD (magnetohydrodynamics) solver, which models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is limited by computational constraints on grid resolution. We investigate porting the MHD solver to the Cell BE architecture, a novel inhomogeneous multicore architecture capable of up to 230 GFlops per processor. Realizing this high performance on the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallel approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the vector/SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We obtained excellent performance numbers, a speed-up of a factor of 25 compared to just using the main processor, while still keeping the numerical implementation details of the code maintainable.
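The column/slice decomposition described above can be illustrated with a minimal, hardware-agnostic sketch (the `process_slice` kernel is a stand-in for the real MHD update, and the column width is an arbitrary choice): each memory-limited SPE streams one small x-y column through its local store one z-slice at a time, which the inner loop mimics.

```python
import numpy as np

def process_slice(tile):
    # Stand-in for the per-slice MHD update kernel.
    return 0.5 * tile

def solve_by_columns(grid, col=4):
    """Sweep a 3D grid as small x-y columns, streamed slice by slice in z,
    mimicking how a memory-limited SPE moves data through its local store."""
    nx, ny, nz = grid.shape
    out = np.empty_like(grid)
    for i in range(0, nx, col):
        for j in range(0, ny, col):
            for k in range(nz):                        # one z-slice at a time
                tile = grid[i:i+col, j:j+col, k]       # "DMA in" to local store
                out[i:i+col, j:j+col, k] = process_slice(tile)  # "DMA out"
    return out
```

On the real hardware the "DMA in"/"DMA out" steps are explicit asynchronous transfers, typically double-buffered so the next slice loads while the current one computes.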
Development of compact fuel processor for 2 kW class residential PEMFCs
NASA Astrophysics Data System (ADS)
Seo, Yu Taek; Seo, Dong Joo; Jeong, Jin Hyeok; Yoon, Wang Lai
Korea Institute of Energy Research (KIER) has been developing a novel fuel processing system to provide hydrogen-rich gas to a residential polymer electrolyte membrane fuel cell (PEMFC) cogeneration system. For the effective design of a compact hydrogen production system, the unit processes of steam reforming, high- and low-temperature water gas shift, steam generation and internal heat exchange are thermally and physically integrated into a packaged hardware system. Several prototypes are under development; the prototype I fuel processor showed a thermal efficiency of 73% on an HHV basis with a methane conversion of 81%. The recently tested prototype II has shown improved performance, with a thermal efficiency of 76% and a methane conversion of 83%. In both prototypes, two-stage PrOx reactors reduce the CO concentration to less than 10 ppm, which is the prerequisite CO limit of the product gas for the PEMFC stack. After its initial performance was confirmed, the prototype I fuel processor was coupled with a PEMFC single cell to test durability, and it was demonstrated that the fuel processor operated successfully for 3 days without any failure of the fuel cell voltage. The prototype II fuel processor also showed stable performance during the durability test.
NASA Technical Reports Server (NTRS)
Goodwin, Sabine A.; Raj, P.
1999-01-01
Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at NASA Ames Research Center, has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing-body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
Dense and Sparse Matrix Operations on the Cell Processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel W.; Shalf, John; Oliker, Leonid
2005-05-01
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.
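The paper's analytic approach can be caricatured by a simple roofline-style bound (a sketch only; the peak and bandwidth numbers below are illustrative, not the paper's): a kernel's execution time is limited either by its flop count against peak compute or by its memory traffic against sustainable bandwidth, whichever is larger.

```python
def predicted_time_s(flops, bytes_moved, peak_flops, peak_bw):
    """Lower-bound execution time for a kernel: it is either compute-bound
    or memory-bound, whichever constraint takes longer."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Illustrative: a 2 GFLOP kernel moving 1 GB, on 200 GFLOP/s peak, 25 GB/s DRAM.
# Memory traffic dominates, so the kernel is memory-bound.
t = predicted_time_s(2e9, 1e9, 200e9, 25e9)
```

Sparse matrix-vector kernels typically land on the memory-bound side of this bound, which is why software-controlled memories like the Cell's local stores matter so much for them.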
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-03-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f,P) = 1/((1 - f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
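Amdahl's Law as stated above can be evaluated directly (a minimal sketch; the figures in the comment are illustrative):

```python
def amdahl_speedup(f, p):
    """Theoretical speedup S(f, P) = 1 / ((1 - f) + f / P) when a fraction f
    of the task time runs in parallel on P processors."""
    return 1.0 / ((1.0 - f) + f / p)

# A 90%-parallel code reaches about 6.4x on 16 processors, and its speedup
# can never exceed the serial-fraction limit 1 / (1 - f) = 10x.
s16 = amdahl_speedup(0.9, 16)
```

The "additional terms" the abstract mentions (communication, load imbalance, tally merging) enter as extra work in the denominator and pull real speedups below this bound.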
NASA Astrophysics Data System (ADS)
Echigo, Mitsuaki; Shinke, Norihisa; Takami, Susumu; Tabata, Takeshi
Natural gas fuel processors have been developed for 500 W and 1 kW class residential polymer electrolyte fuel cell (PEFC) systems. These fuel processors contain all the elements—desulfurizers, steam reformers, CO shift converters, CO preferential oxidation (PROX) reactors, steam generators, burners and heat exchangers—in one package. For the PROX reactor, a single-stage PROX process using a novel PROX catalyst was adopted. In the 1 kW class fuel processor, a thermal efficiency of 83% at HHV was achieved at nominal output, assuming an H2 utilization rate in the cell stack of 76%. A CO concentration below 1 ppm in the product gas was achieved even under the condition of [O2]/[CO] = 1.5 at the PROX reactor. The long-term durability of the fuel processor was demonstrated with almost no deterioration in thermal efficiency and CO concentration over 10,000 h, 1000 start-and-stop cycles, and 25,000 cycles of load change.
Sachdev, Rishibha; Kappes-Horn, Karin; Paulsen, Lydia; Duernberger, Yvonne; Pleschka, Catharina; Denner, Philip; Kundu, Bishwajit; Reimann, Jens; Vorberg, Ina
2018-03-15
Sporadic inclusion body myositis (sIBM) is the most prevalent acquired muscle disorder in the elderly with no defined etiology or effective therapy. Endoplasmic reticulum stress and deposition of myostatin, a secreted negative regulator of muscle growth, have been implicated in disease pathology. The myostatin signaling pathway has emerged as a major target for symptomatic treatment of muscle atrophy. Here, we systematically analyzed the maturation and secretion of myostatin precursor MstnPP and its metabolites in a human muscle cell line. We find that increased MstnPP protein levels induce ER stress. MstnPP metabolites were predominantly retained within the endoplasmic reticulum (ER), also evident in sIBM histology. MstnPP cleavage products formed insoluble high molecular weight aggregates, a process that was aggravated by experimental ER stress. Importantly, ER stress also impaired secretion of mature myostatin. Reduced secretion and aggregation of MstnPP metabolites were not simply caused by overexpression, as both events were also observed in wildtype cells under ER stress. It is tempting to speculate that reduced circulating myostatin growth factor could be one explanation for the poor clinical efficacy of drugs targeting the myostatin pathway in sIBM.
NASA Astrophysics Data System (ADS)
Bekas, C.; Curioni, A.
2010-06-01
Enforcing the orthogonality of approximate wavefunctions becomes one of the dominant computational kernels in planewave based Density Functional Theory electronic structure calculations that involve thousands of atoms. In this context, algorithms that enjoy both excellent scalability and single processor performance properties are much needed. In this paper we present block versions of the Gram-Schmidt method and we show that they are excellent candidates for our purposes. We compare the new approach with the state of the art practice in planewave based calculations and find that it has much to offer, especially when applied on massively parallel supercomputers such as the IBM Blue Gene/P Supercomputer. The new method achieves excellent sustained performance that surpasses 73 TFLOPS (67% of peak) on 8 Blue Gene/P racks (32,768 compute cores), while it enables more than a twofold decrease in run time when compared with the best competing methodology.
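A serial sketch of the block Gram-Schmidt idea (the block width is arbitrary, and the production code distributes these products over many cores; this is not the paper's implementation) shows why it suits high-performance hardware: the projections against all previously orthonormalized columns collapse into one large matrix-matrix product per block, instead of many low-intensity vector operations.

```python
import numpy as np

def block_gram_schmidt(A, block=2):
    """Orthonormalize the columns of A block by block. Each block is first
    projected against the already-orthonormal columns (one BLAS-3 product),
    then orthonormalized internally (here via a small dense QR)."""
    Q = A.astype(float).copy()
    n = Q.shape[1]
    for j in range(0, n, block):
        B = Q[:, j:j+block]
        if j > 0:
            prev = Q[:, :j]
            B -= prev @ (prev.T @ B)        # project out earlier blocks
        Q[:, j:j+block] = np.linalg.qr(B)[0]  # orthonormalize current block
    return Q
```

For ill-conditioned inputs a second projection pass (re-orthogonalization) is normally added; the sketch omits it for brevity.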
Computational Issues in Damping Identification for Large Scale Problems
NASA Technical Reports Server (NTRS)
Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.
1997-01-01
Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
High Performance Computing at NASA
NASA Technical Reports Server (NTRS)
Bailey, David H.; Cooper, D. M. (Technical Monitor)
1994-01-01
The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.
Optimization and experimental realization of the quantum permutation algorithm
NASA Astrophysics Data System (ADS)
Yalçınkaya, I.; Gedik, Z.
2017-12-01
The quantum permutation algorithm provides computational speed-up over classical algorithms for determining the parity of a given cyclic permutation. For its n-qubit implementations, the number of required quantum gates scales quadratically with n due to the quantum Fourier transforms included. We show here for the n-qubit case that the algorithm can be simplified so that it requires only O(n) quantum gates, which theoretically reduces the complexity of the implementation. To test our results experimentally, we utilize IBM's 5-qubit quantum processor to realize the algorithm by using the original and simplified recipes for the 2-qubit case. It turns out that the latter results in a significantly higher success probability, which allows us to verify the algorithm more precisely than the previous experimental realizations. We also verify the algorithm for the first time for the 3-qubit case with a considerable success probability by taking advantage of our simplified scheme.
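For reference, the quantity the quantum algorithm determines can be computed classically from the cycle structure of the permutation (a sketch of the classical baseline, not of the quantum circuit): a cycle of length L contributes L - 1 transpositions, and the parity is their sum mod 2.

```python
def permutation_parity(perm):
    """Parity (0 = even, 1 = odd) of a permutation, given as a list where
    position i maps to perm[i], computed via its cycle decomposition."""
    seen = [False] * len(perm)
    parity = 0
    for start in range(len(perm)):
        length = 0
        j = start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        if length:
            parity ^= (length - 1) & 1  # each L-cycle adds L - 1 transpositions
    return parity
```

A cyclic shift of n elements is a single n-cycle, so its parity is even exactly when n is odd, which is the case the algorithm distinguishes.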
Programming a hillslope water movement model on the MPP
NASA Technical Reports Server (NTRS)
Devaney, J. E.; Irving, A. R.; Camillo, P. J.; Gurney, R. J.
1987-01-01
A physically based numerical model of heat and moisture flow within a hillslope was developed on a parallel architecture computer, as a precursor to a model of a complete catchment. Moisture flow within a catchment includes evaporation, overland flow, flow in unsaturated soil, and flow in saturated soil. Because of the empirical evidence that moisture flow in unsaturated soil is mainly in the vertical direction, flow in the unsaturated zone can be modeled as a series of one-dimensional columns. This initial version of the hillslope model includes evaporation and a single column of one-dimensional unsaturated zone flow. This case has already been solved on an IBM 3081 computer and is now being applied to the massively parallel processor architecture so as to make the extension beyond the one-dimensional case easier and to check the problems and benefits of using a parallel architecture machine.
NASA Technical Reports Server (NTRS)
Ramaswamy, Shankar; Banerjee, Prithviraj
1994-01-01
Appropriate data distribution has been found to be critical for obtaining good performance on Distributed Memory Multicomputers like the CM-5, Intel Paragon and IBM SP-1. It has also been found that some programs need to change their distributions during execution for better performance (redistribution). This work focuses on automatically generating efficient routines for redistribution. We present a new mathematical representation for regular distributions called PITFALLS and then discuss algorithms for redistribution based on this representation. One of the significant contributions of this work is being able to handle arbitrary source and target processor sets while performing redistribution. Another important contribution is the ability to handle an arbitrary number of dimensions for the array involved in the redistribution in a scalable manner. Our implementation of these techniques is based on an MPI-like communication library. The results presented show the low overheads for our redistribution algorithm as compared to naive runtime methods.
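The kind of index-set computation a redistribution routine must perform can be sketched naively (the two distributions below are illustrative; PITFALLS itself derives these intersection sets in closed form rather than by enumerating every element):

```python
def block_owner(i, n, p):
    """Owner of global index i under a block distribution of n items over p procs."""
    b = -(-n // p)  # ceil(n / p) block size
    return i // b

def cyclic_owner(i, n, p):
    """Owner of global index i under a cyclic (round-robin) distribution."""
    return i % p

def transfer_sets(n, p, src, dst):
    """Map (source proc, dest proc) -> global indices that must move when
    redistributing n items from distribution `src` to distribution `dst`.
    Naive O(n) enumeration for illustration only."""
    moves = {}
    for i in range(n):
        s, d = src(i, n, p), dst(i, n, p)
        if s != d:
            moves.setdefault((s, d), []).append(i)
    return moves
```

Replacing this element-by-element scan with closed-form intersections of regular section descriptors is precisely what makes the approach scalable to arbitrary source and target processor sets.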
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Shareef, N. H.; Xie, M.
1991-01-01
A generalized algorithmic procedure is presented for handling the constraints in transmissions, which are treated as a multibody system of interconnected rigid/flexible bodies. The type of constraints are classified based on the interconnection of the bodies, assuming one or more points of contact to exist between them. The method is explained through flow charts and configuration/interaction tables. A significant increase in speed of execution is achieved by vectorizing the developed code in computationally intensive areas. The study of an example consisting of two meshing disks rotating at high angular velocity is carried out. The dynamic behavior of the constraint forces associated with the generalized coordinates of the system are plotted by selecting various modes. Applications are intended for the study of dynamic and subsequent prediction of constraint forces at the gear teeth contacting points in helicopter transmissions with the aim of improving performance dependability.
Parallel Processing of Adaptive Meshes with Load Balancing
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.
COPYCAT; IBM OS system catalog utility routine. [IBM360,370; Assembly language]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.
COPYCAT is an OS utility program designed to produce an efficient system-wide catalog which may reside on many volumes. Substantial improvement in performance may also be obtained on a system with only a single catalog. First, catalog entries from many different catalogs may be redistributed to equalize the load on each catalog. Second, each individual catalog is restructured in a way designed to minimize the I/O time required for searching and updating. Redistribution and restructuring parameters are under user control. Model DSCB's for generation data groups and alias entries are also processed. Catalogs on all direct access devices, including data cells, are supported. Backup copies may also be made. IBM360,370; Assembly language; OS/MVT, OS/MFT, OS/VS1 and OS/VS2 Release 1. A large region size is recommended since COPYCAT will use all of the core available to it for buffers.
Control apparatus and method for efficiently heating a fuel processor in a fuel cell system
Doan, Tien M.; Clingerman, Bruce J.
2003-08-05
A control apparatus and method for efficiently controlling the amount of heat generated by a fuel processor in a fuel cell system by determining a temperature error between actual and desired fuel processor temperatures. The temperature error is converted to a combustor fuel injector command signal or a heat dump valve position command signal, depending upon the type of temperature error. Logic controls are responsive to the combustor fuel injector command signal and the heat dump valve position command signal to prevent the combustor fuel injector command signal from being generated if the heat dump valve is opened or, alternatively, to prevent the heat dump valve position command signal from being generated if the combustor fuel injector is opened.
Compact propane fuel processor for auxiliary power unit application
NASA Astrophysics Data System (ADS)
Dokupil, M.; Spitta, C.; Mathiak, J.; Beckhaus, P.; Heinzel, A.
With focus on mobile applications, a fuel cell auxiliary power unit (APU) using liquefied petroleum gas (LPG) is currently being developed at the Centre for Fuel Cell Technology (Zentrum für BrennstoffzellenTechnik, ZBT gGmbH). The system consists of an integrated, compact and lightweight fuel processor and a low-temperature PEM fuel cell for an electric power output of 300 W. This article presents the current status of development of the fuel processor, which is designed for a nominal hydrogen output of 1 kWth,H2 within a load range from 50 to 120%. A modular setup was chosen, defining a reformer/burner module and a CO-purification module. Based on the performance specifications, thermodynamic simulations, and the benchmarking and selection of catalysts, the modules have been developed and characterised simultaneously and then assembled into the complete fuel processor. Automated operation results in a cold startup time of about 25 min at nominal load and carbon monoxide output concentrations below 50 ppm for steady-state and dynamic operation. Fast transient response of the fuel processor at load changes, with low fluctuations of the reformate gas composition, has also been achieved. Besides the development of the main reactors, the transfer of the fuel processor to an autonomous system is of major concern. Hence, concepts for packaging have been developed, resulting in a volume of 7 l and a weight of 3 kg. Further, a selection of peripheral components has been tested and evaluated with regard to substituting the laboratory equipment.
NASA Astrophysics Data System (ADS)
Nobile, Matthew A.; Chu, Richard C.
2005-09-01
Although Bill Lang's accomplishments and key roles in national and international standards and in the formation of INCE are widely recognized, it is sometimes forgotten that for nearly 35 years he also had a "day job" at the IBM Corporation. His achievements at IBM were no less significant and enduring than those in external standards and professional societies. This paper will highlight some of the accomplishments and activities of Bill Lang as an IBM noise control engineer, the creator of the IBM Acoustics Lab in Poughkeepsie, the founder of the global Acoustics program at IBM, and his many other IBM leadership roles. Bill was also a long-serving IBM manager, with the full set of personnel issues to deal with, so his people-management skills were often called into play. Bill ended his long and fruitful IBM career at a high point. In 1988, he took an original idea of his to the top of IBM executive management, which led directly to the formation of the IBM Academy of Technology, today the preeminent body of IBM top technical leaders from around the world.
Comparing an FPGA to a Cell for an Image Processing Application
NASA Astrophysics Data System (ADS)
Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.
2010-12-01
Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to exploit the parallel nature of modern image processing algorithms. On the other hand, PlayStation 3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to execute complex image processing algorithms with high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, iris recognition systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.
Effect of Alemtuzumab (CAMPATH 1-H) in patients with inclusion-body myositis
Rakocevic, Goran; Schmidt, Jens; Salajegheh, Mohammad; McElroy, Beverly; Harris-Love, Michael O.; Shrader, Joseph A.; Levy, Ellen W.; Dambrosia, James; Kampen, Robert L.; Bruno, David A.; Kirk, Allan D.
2009-01-01
Sporadic inclusion-body myositis (sIBM) is the most common disabling, adult-onset, inflammatory myopathy, histologically characterized by intense inflammation and vacuolar degeneration. In spite of T cell-mediated cytotoxicity and persistent, clonally expanded and antigen-driven endomysial T cells, the disease is resistant to immunotherapies. Alemtuzumab is a humanized monoclonal antibody that causes an immediate depletion or severe reduction of peripheral blood lymphocytes, lasting at least 6 months. We designed a proof-of-principle study to examine if one series of Alemtuzumab infusions in sIBM patients depletes not only peripheral blood lymphocytes but also endomysial T cells and alters the natural course of the disease. Thirteen sIBM patients with established 12-month natural history data received 0.3 mg/kg/day Alemtuzumab for 4 days. The study was powered to capture a ≥10% increase in strength 6 months after treatment. The primary end-point was disease stabilization compared to natural history, assessed by bi-monthly Quantitative Muscle Strength Testing and Medical Research Council strength measurements. Lymphocytes and T cell subsets were monitored concurrently in the blood and the repeated muscle biopsies. Alterations in the mRNA expression of inflammatory, stressor and degeneration-associated molecules were examined in the repeated biopsies. During a 12-month observation period, the patients' total strength had declined by a mean of 14.9% based on Quantitative Muscle Strength Testing. Six months after therapy, the overall decline was only 1.9% (P < 0.002), corresponding to a 13% differential gain. Among those patients, four improved by a mean of 10% and six reported improved performance of daily activities.
The benefit was more evident by the Medical Research Council scales, which demonstrated a decline in the total scores by 13.8% during the observation period but an improvement by 11.4% (P < 0.001) after 6 months, reaching the level of strength recorded 12 months earlier. Depletion of peripheral blood lymphocytes, including the naive and memory CD8+ cells, was noted 2 weeks after treatment and persisted up to 6 months. The effector CD45RA+CD62L cells, however, started to increase 2 months after therapy and peaked by the 4th month. Repeated muscle biopsies showed reduction of CD3 lymphocytes by a mean of 50% (P < 0.008), most prominent in the improved patients, and reduced mRNA expression of the stressor molecules Fas, MIP-1α and αB-crystallin; the mRNA of desmin, a regeneration-associated molecule, increased. This proof-of-principle study provides insights into the pathogenesis of inclusion-body myositis and concludes that in sIBM one series of Alemtuzumab infusions can slow down disease progression up to 6 months, improve the strength of some patients, and reduce endomysial inflammation and stressor molecules. These encouraging results, the first in sIBM, warrant a future study with repeated infusions (ClinicalTrials.gov NCT00079768). PMID:19454532
Dynamic behavior of gasoline fuel cell electric vehicles
NASA Astrophysics Data System (ADS)
Mitchell, William; Bowers, Brian J.; Garnier, Christophe; Boudjemaa, Fabien
As we begin the 21st century, society is continuing efforts towards finding clean power sources and alternative forms of energy. In the automotive sector, reduction of pollutants and greenhouse gas emissions from the power plant is one of the main objectives of car manufacturers, and innovative technologies are under active consideration to achieve this goal. One technology that has been proposed and vigorously pursued in the past decade is the proton exchange membrane (PEM) fuel cell, an electrochemical device that reacts hydrogen with oxygen to produce water, electricity and heat. Since today there is no extensive hydrogen infrastructure and no commercially viable hydrogen storage technology for vehicles, there is a continuing debate as to how the hydrogen for these advanced vehicles will be supplied. In order to circumvent the above issues, power systems based on PEM fuel cells can employ an on-board fuel processor that has the ability to convert conventional fuels such as gasoline into hydrogen for the fuel cell. This option could thereby remove the fuel infrastructure and storage issues. However, for these fuel processor/fuel cell vehicles to be commercially successful, issues such as start time and transient response must be addressed. This paper discusses the role of transient response of the fuel processor power plant and how it relates to battery sizing for a gasoline fuel cell vehicle. In addition, results of fuel processor testing from a current Renault/Nuvera Fuel Cells project are presented to show the progress in transient performance.
Metal membrane-type 25-kW methanol fuel processor for fuel-cell hybrid vehicle
NASA Astrophysics Data System (ADS)
Han, Jaesung; Lee, Seok-Min; Chang, Hyuksang
A 25-kW on-board methanol fuel processor has been developed. It consists of a methanol steam reformer, which converts methanol to a hydrogen-rich gas mixture, and two metal membrane modules, which clean up the gas mixture to high-purity hydrogen. It produces hydrogen at rates up to 25 N m³/h, and the purity of the product hydrogen is over 99.9995% with a CO content of less than 1 ppm. In this fuel processor, the operating conditions of the reformer and the metal membrane modules are nearly the same, so that operation is simple and the overall system construction is compact, eliminating extensive temperature control of the intermediate gas streams. The recovery of hydrogen in the metal membrane units is maintained at 70-75% by control of the pressure in the system, and the remaining 25-30% of the hydrogen is recycled to a catalytic combustion zone to supply heat for the methanol steam-reforming reaction. The thermal efficiency of the fuel processor is about 75%, and the inlet air pressure is as low as 4 psi. The fuel processor is currently being integrated with a 25-kW polymer electrolyte membrane fuel-cell (PEMFC) stack developed by the Hyundai Motor Company. The stack exhibits the same performance as with pure hydrogen, which shows that maximum power output as well as minimum stack degradation is achievable with this fuel processor. This fuel-cell 'engine' is to be installed in a hybrid passenger vehicle for road testing.
Inlet Spillage Drag Predictions Using the AIRPLANE Code
NASA Technical Reports Server (NTRS)
Thomas, Scott D.; Won, Mark A.; Cliff, Susan E.
1999-01-01
AIRPLANE (Jameson/Baker) is a steady inviscid unstructured Euler flow solver. It has been validated on many HSR geometries. It is implemented as MESHPLANE, an unstructured mesh generator, and FLOPLANE, an iterative flow solver. The surface description from an Intergraph CAD system goes into MESHPLANE as collections of polygonal curves to generate the 3D mesh. The flow solver uses a multistage time stepping scheme with residual averaging to approach steady state, but it is not time accurate. The flow solver was ported from Cray to IBM SP2 by Wu-Sun Cheng (IBM); it could only be run on 4 CPUs at a time because of memory limitations. Meshes for the four cases had about 655,000 points in the flow field, about 3.9 million tetrahedra, and about 77,500 points on the surface. The flow solver took about 23 wall seconds per iteration when using 4 CPUs. It took about eight and a half wall hours to run 1,300 iterations at a time (the queue limit is 10 hours). A revised version of FLOPLANE (Thomas) was used on up to 64 CPUs to finish some calculations at the end. We had to turn on more communication when using more processors to eliminate noise that was contaminating the flow field; this added about 50% to the elapsed wall time per iteration when using 64 CPUs. This study involved computing lift and drag for a wing/body/nacelle configuration at Mach 0.9 and 4 degrees pitch. Four cases were considered, corresponding to four nacelle mass flow conditions.
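The timing figures quoted above are internally consistent; a quick back-of-the-envelope check (illustrative only, using only the numbers in the abstract):

```python
# Sanity check of the AIRPLANE timing figures quoted above.
SECONDS_PER_ITER_4CPU = 23.0   # wall seconds per iteration on 4 CPUs
ITERS_PER_BATCH = 1300         # iterations run per queue submission

batch_hours = SECONDS_PER_ITER_4CPU * ITERS_PER_BATCH / 3600.0
# 23 s * 1300 = 29,900 s ~ 8.3 h, consistent with "about eight and a half
# wall hours" and comfortably inside the 10-hour queue limit.

# The extra communication enabled on 64 CPUs added ~50% to the elapsed
# wall time per iteration at that processor count.
overhead_factor = 1.5
```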
Analysis of NCAM helps identify unusual phenotypes of hereditary inclusion-body myopathy.
Broccolini, A; Gidaro, T; Tasca, G; Morosetti, R; Rodolico, C; Ricci, E; Mirabella, M
2010-07-20
Hereditary inclusion-body myopathy or distal myopathy with rimmed vacuoles (h-IBM/DMRV) is due to mutations of the UDP-N-acetylglucosamine 2-epimerase/N-acetylmannosamine kinase (GNE) gene, which codes for an enzyme of the sialic acid biosynthetic pathway. By Western blot (WB) analysis, we have previously shown that in h-IBM/DMRV muscle, the neural cell adhesion molecule (NCAM) has increased electrophoretic mobility that reflects reduced sialylation of the protein. Our objective was to identify patients with h-IBM/DMRV with an atypical clinical or pathologic phenotype using NCAM analysis, and to investigate the possible cellular mechanism associated with the overall abnormal sialylation of NCAM observed in this disorder. WB analysis of NCAM was performed on muscle biopsies of 84 patients with an uncharacterized muscle disorder who were divided into the following two groups: 1) 46 patients with a proximal muscle weakness in whom the main limb-girdle muscular dystrophy syndromes had been ruled out; and 2) 38 patients with a distal distribution of weakness in whom a neurogenic disorder had been excluded. Patients in whom a reduced sialylation of NCAM was suspected were studied for the presence of GNE mutations. In 3 patients, we found that NCAM had increased electrophoretic mobility, thus suggesting an abnormal sialylation of the protein. The genetic study demonstrated that they all carried pathogenic GNE mutations. Further studies demonstrated that hyposialylated NCAM, showing increased electrophoretic mobility on WB, is expressed by nonregenerating fibers in h-IBM/DMRV muscle. WB analysis of NCAM may be instrumental in the identification of h-IBM/DMRV with atypical clinical or pathologic features.
Bakkar, Nadine; Kovalik, Tina; Lorenzini, Ileana; Spangler, Scott; Lacoste, Alix; Sponaugle, Kyle; Ferrante, Philip; Argentinis, Elenee; Sattler, Rita; Bowser, Robert
2018-02-01
Amyotrophic lateral sclerosis (ALS) is a devastating neurodegenerative disease with no effective treatments. Numerous RNA-binding proteins (RBPs) have been shown to be altered in ALS, with mutations in 11 RBPs causing familial forms of the disease, and 6 more RBPs showing abnormal expression/distribution in ALS albeit without any known mutations. RBP dysregulation is widely accepted as a contributing factor in ALS pathobiology. There are at least 1542 RBPs in the human genome; therefore, other unidentified RBPs may also be linked to the pathogenesis of ALS. We used IBM Watson® to sieve through all RBPs in the genome and identify new RBPs linked to ALS (ALS-RBPs). IBM Watson extracted features from published literature to create semantic similarities and identify new connections between entities of interest. IBM Watson analyzed all published abstracts of previously known ALS-RBPs, and applied that text-based knowledge to all RBPs in the genome, ranking them by semantic similarity to the known set. We then validated the Watson top-ten-ranked RBPs at the protein and RNA levels in tissues from ALS and non-neurological disease controls, as well as in patient-derived induced pluripotent stem cells. Five RBPs previously unlinked to ALS (hnRNPU, Syncrip, RBMS3, Caprin-1, and NUPL2) showed significant alterations in ALS compared to controls. Overall, we successfully used IBM Watson to help identify additional RBPs altered in ALS, highlighting the use of artificial intelligence tools to accelerate scientific discovery in ALS and possibly other complex neurological disorders.
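The ranking strategy described above (score every candidate by text similarity to a profile built from abstracts of known ALS-RBPs) can be illustrated with a minimal stand-in. IBM Watson's actual feature extraction is proprietary; this sketch substitutes a simple bag-of-words cosine similarity, and the gene names and abstract snippets are hypothetical:

```python
# Illustrative sketch of similarity-based candidate ranking; NOT Watson's
# actual algorithm. Profile = word counts over known-set abstracts; each
# candidate is scored by cosine similarity to that profile.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(known_abstracts, candidate_abstracts):
    # Build one profile from all abstracts of known ALS-linked RBPs,
    # then score each candidate against it and sort best-first.
    profile = Counter(w for text in known_abstracts for w in text.lower().split())
    scores = {name: cosine(profile, Counter(text.lower().split()))
              for name, text in candidate_abstracts.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy corpus, invented for illustration only.
known = ["tdp43 rna binding aggregation motor neuron",
         "fus rna binding stress granule neuron"]
cands = {"geneA": "rna binding stress granule neuron degeneration",
         "geneB": "membrane transport golgi vesicle"}
ranking = rank_candidates(known, cands)  # geneA ranks above geneB
```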
NASA Astrophysics Data System (ADS)
Palo, Daniel R.; Holladay, Jamie D.; Rozmiarek, Robert T.; Guzman-Leong, Consuelo E.; Wang, Yong; Hu, Jianli; Chin, Ya-Huei; Dagle, Robert A.; Baker, Eddie G.
A 15-We portable power system is being developed for the US Army that consists of a hydrogen-generating fuel reformer coupled to a proton-exchange membrane fuel cell. In the first phase of this project, a methanol steam reformer system was developed and demonstrated. The reformer system included a combustor, two vaporizers, and a steam reforming reactor. The device was demonstrated as a thermally independent unit over the range of 14-80 Wt output. Assuming a 14-day mission life and an ultimate 1-kg fuel processor/fuel cell assembly, a base case was chosen to illustrate the expected system performance. Operating at 13 We, the system yielded a fuel processor efficiency of 45% (LHV of H2 out/LHV of fuel in) and an estimated net efficiency of 22% (assuming a fuel cell efficiency of 48%). The resulting energy density of 720 Wh/kg is several times the energy density of the best lithium-ion batteries. Some immediate areas of improvement in thermal management also have been identified, and an integrated fuel processor is under development. The final system will be a hybrid, containing a fuel reformer, a fuel cell, and a rechargeable battery. The battery will provide power for start-up and added capacity for times of peak power demand.
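The quoted net efficiency follows directly from the two stage efficiencies given in the abstract; a minimal check (definitions as stated there):

```python
# Net system efficiency as the product of the two conversion stages,
# using only the numbers quoted in the abstract.
fuel_processor_eff = 0.45   # LHV of H2 out / LHV of fuel in
fuel_cell_eff = 0.48        # assumed fuel cell efficiency

net_eff = fuel_processor_eff * fuel_cell_eff
# 0.45 * 0.48 = 0.216, i.e. the ~22% net efficiency quoted above.
```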
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palo, Daniel R.; Holladay, Jamelyn D.; Rozmiarek, Robert T.
A 15-We portable power system is being developed for the US Army, comprised of a hydrogen-generating fuel reformer coupled to a hydrogen-converting fuel cell. As a first phase of this project, a methanol steam reformer system was developed and demonstrated. The reformer system included a combustor, two vaporizers, and a steam-reforming reactor. The device was demonstrated as a thermally independent unit over the range of 14 to 80 Wt output. Assuming a 14-day mission life and an ultimate 1-kg fuel processor/fuel cell assembly, a base case was chosen to illustrate the expected system performance. Operating at 13 We, the system yielded a fuel processor efficiency of 45% (LHV of H2 out/LHV of fuel in) and an estimated net efficiency of 22% (assuming a fuel cell efficiency of 48%). The resulting energy density of 720 W-hr/kg is several times the energy density of the best lithium-ion batteries. Some immediate areas of improvement in thermal management also have been identified and an integrated fuel processor is under development. The final system will be a hybrid, containing a fuel reformer, fuel cell, and rechargeable battery. The battery will provide power for startup and added capacity for times of peak power demand.
1988-05-20
AVF Control Number: AVF-VSR-84.1087. Ada® Compiler Validation Summary Report: International Business Machines Corporation, IBM … System, Version 1.1.0, International Business Machines Corporation, Wright-Patterson AFB. IBM 4381 under VM/SP CMS, Release 3.6 (host) and IBM 4381 … an IBM 4381 operating under MVS, Release 3.8. On-site testing was performed 18 May 1987 through 20 May 1987 at International Business Machines
1988-03-28
International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0; IBM 4381 under VM/HPO (host); IBM 4381 under MVS/XA (target) … Program Office, AJPO … International Business Machines Corporation, IBM … Standard ANSI/MIL-STD-1815A in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record
Diesel fuel to dc power: Navy & Marine Corps Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloomfield, D.P.
1996-12-31
During the past year Analytic Power has tested fuel cell stacks and diesel fuel processors for US Navy and Marine Corps applications. The units are 10 kW demonstration power plants. The USN power plant was built to demonstrate the feasibility of diesel-fueled PEM fuel cell power plants for 250 kW and 2.5 MW shipboard power systems. We designed and tested a ten-cell, 1 kW USMC substack and fuel processor. The complete 10 kW prototype power plant, which has application to both power and hydrogen generation, is now under construction. The USN and USMC fuel cell stacks have been tested on both actual and simulated reformate. Analytic Power has accumulated operating experience with autothermal reforming based fuel processors operating on sulfur-bearing diesel fuel, jet fuel, propane and natural gas. We have also completed the design and fabrication of an advanced regenerative ATR for the USMC. One of the significant problems with small fuel processors is heat loss, which limits their ability to operate with the high steam-to-carbon ratios required for coke-free, high-efficiency operation. The new USMC unit specifically addresses these heat transfer issues. The advances in the military programs have been incorporated into Analytic Power's commercial units, which are now under test.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasserman, H.J.
1996-02-01
The second generation of the Digital Equipment Corp. (DEC) DECchip Alpha AXP microprocessor is referred to as the 21164. From the viewpoint of numerically-intensive computing, the primary difference between it and its predecessor, the 21064, is that the 21164 has twice the multiply/add throughput per clock period (CP), a maximum of two floating point operations (FLOPS) per CP vs. one for the 21064. The AlphaServer 8400 is a shared-memory multiprocessor server system that can accommodate up to 12 CPUs and up to 14 GB of memory. In this report we compare single processor performance of the 8400 system with that of the International Business Machines Corp. (IBM) RISC System/6000 POWER-2 microprocessor running at 66 MHz, the Silicon Graphics, Inc. (SGI) MIPS R8000 microprocessor running at 75 MHz, and the Cray Research, Inc. CRAY J90. The performance comparison is based on a set of Fortran benchmark codes that represent a portion of the Los Alamos National Laboratory supercomputer workload. The advantage of using these codes is that they span a wide range of computational characteristics, such as vectorizability, problem size, and memory access pattern. The primary disadvantage of using them is that detailed, quantitative analysis of performance behavior of all codes on all machines is difficult. One important addition to the benchmark set appears for the first time in this report. Whereas the older version was written for a vector processor, the newer version is more optimized for microprocessor architectures. Therefore, we have, for the first time, an opportunity to measure performance on a single application using implementations that expose the respective strengths of vector and superscalar architecture. All results in this report are from single processors. A subsequent article will explore shared-memory multiprocessing performance of the 8400 system.
NASA Astrophysics Data System (ADS)
Stellmach, Stephan; Hansen, Ulrich
2008-05-01
Numerical simulations of the process of convection and magnetic field generation in planetary cores still fail to reach geophysically realistic control parameter values. Future progress in this field depends crucially on efficient numerical algorithms which are able to take advantage of the newest generation of parallel computers. Desirable features of simulation algorithms include (1) spectral accuracy, (2) an operation count per time step that is small and roughly proportional to the number of grid points, (3) memory requirements that scale linearly with resolution, (4) an implicit treatment of all linear terms including the Coriolis force, (5) the ability to treat all kinds of common boundary conditions, and (6) reasonable efficiency on massively parallel machines with tens of thousands of processors. So far, algorithms for fully self-consistent dynamo simulations in spherical shells do not achieve all these criteria simultaneously, resulting in strong restrictions on the possible resolutions. In this paper, we demonstrate that local dynamo models, in which the process of convection and magnetic field generation is only simulated for a small part of a planetary core in Cartesian geometry, can achieve the above goal. We propose an algorithm that fulfills the first five of the above criteria and demonstrate that a model implementation of our method on an IBM Blue Gene/L system scales impressively well for up to O(10⁴) processors. This allows for numerical simulations at rather extreme parameter values.
Metascalable molecular dynamics simulation of nano-mechano-chemistry
NASA Astrophysics Data System (ADS)
Shimojo, F.; Kalia, R. K.; Nakano, A.; Nomura, K.; Vashishta, P.
2008-07-01
We have developed a metascalable (or 'design once, scale on new architectures') parallel application-development framework for first-principles based simulations of nano-mechano-chemical processes on emerging petaflops architectures based on spatiotemporal data locality principles. The framework consists of (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms, (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these scalable algorithms onto hardware. The EDC-STEP-HCD framework exposes and expresses maximal concurrency and data locality, thereby achieving parallel efficiency as high as 0.99 for 1.59-billion-atom reactive force field molecular dynamics (MD) and 17.7-million-atom (1.56 trillion electronic degrees of freedom) quantum mechanical (QM) MD in the framework of the density functional theory (DFT) on adaptive multigrids, in addition to 201-billion-atom nonreactive MD, on 196 608 IBM BlueGene/L processors. We have also used the framework for automated execution of adaptive hybrid DFT/MD simulation on a grid of six supercomputers in the US and Japan, in which the number of processors changed dynamically on demand and tasks were migrated according to unexpected faults. The paper presents the application of the framework to the study of nanoenergetic materials: (1) combustion of an Al/Fe2O3 thermite and (2) shock initiation and reactive nanojets at a void in an energetic crystal.
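The parallel efficiency of 0.99 cited above uses the standard definition E = T₁ / (p · Tₚ); a minimal sketch with the definition itself (the timing values below are illustrative, not taken from the paper):

```python
# Standard parallel-efficiency metric: 1.0 means ideal linear scaling.
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """E = T1 / (p * Tp)."""
    return t_serial / (n_procs * t_parallel)

# Illustrative example: a run that is 990x faster on 1000 processors has
# efficiency 0.99, the figure the EDC-STEP-HCD framework reports at the
# much larger scale of 196,608 BlueGene/L processors.
eff = parallel_efficiency(t_serial=1000.0, t_parallel=1000.0 / 990.0, n_procs=1000)
```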
Simulation of a 250 kW diesel fuel processor/PEM fuel cell system
NASA Astrophysics Data System (ADS)
Amphlett, J. C.; Mann, R. F.; Peppley, B. A.; Roberge, P. R.; Rodrigues, A.; Salvador, J. P.
Polymer-electrolyte membrane (PEM) fuel cell systems offer a potential power source for utility and mobile applications. Practical fuel cell systems use fuel processors for the production of hydrogen-rich gas. Liquid fuels, such as diesel or other related fuels, are attractive options as feeds to a fuel processor. The generation of hydrogen gas for fuel cells, in most cases, becomes the crucial design issue with respect to weight and volume in these applications. Furthermore, these systems will require a gas clean-up system to ensure that the fuel quality meets the demands of the cell anode. The endothermic nature of the reformer will have a significant effect on the overall system efficiency. The gas clean-up system may also significantly affect the overall heat balance. To optimize the performance of this integrated system, therefore, waste heat must be used effectively. Previously, we have concentrated on catalytic methanol-steam reforming. A model of a methanol steam reformer has been previously developed and has been used as the basis for a new, higher temperature model for liquid hydrocarbon fuels. Similarly, our fuel cell evaluation program previously led to the development of a steady-state electrochemical fuel cell model (SSEM). The hydrocarbon fuel processor model and the SSEM have now been incorporated in the development of a process simulation of a 250 kW diesel-fueled reformer/fuel cell system using a process simulator. The performance of this system has been investigated for a variety of operating conditions and a preliminary assessment of thermal integration issues has been carried out. This study demonstrates the application of a process simulation model as a design analysis tool for the development of a 250 kW fuel cell system.
SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)
NASA Technical Reports Server (NTRS)
Coe, H. H.
1994-01-01
The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. 
These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64 bit words. 
The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diske
Direct RF A-O Processor Spectrum Analyzer.
1981-08-01
The primary objective was to develop and demonstrate a design approach, along with the associated processing technologies, for a wideband acousto-optic Bragg cell spectrum analyzer. The signal processor used to demonstrate feasibility of the technical approach consisted of two bulk-wave acousto-optic deflectors
Monte Carlo dose calculation using a cell processor based PlayStation 3 system
NASA Astrophysics Data System (ADS)
Chow, James C. L.; Lam, Phil; Jaffray, David A.
2012-02-01
This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured by the profiler gprof, which reports the number of executions and total time spent in each function. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc is made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.
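The 20-80 (Pareto) analysis described above amounts to finding the smallest set of functions that dominates total runtime in a flat profile. A minimal sketch, with toy numbers rather than EGSnrc measurements:

```python
# Illustrative Pareto-rule hotspot selection over a gprof-style flat profile:
# pick the fewest functions that account for >= 80% of total runtime.
def pareto_hotspots(flat_profile, threshold=0.8):
    total = sum(flat_profile.values())
    hot, covered = [], 0.0
    for name, secs in sorted(flat_profile.items(),
                             key=lambda kv: kv[1], reverse=True):
        hot.append(name)
        covered += secs
        if covered / total >= threshold:
            break
    return hot

# Hypothetical timings (not measured data): two functions dominate,
# mirroring the HOWNEAR / RANMAR_GET situation described above.
profile = {"HOWNEAR": 42.0, "RANMAR_GET": 40.0, "AUSGAB": 10.0, "other": 8.0}
hotspots = pareto_hotspots(profile)  # -> ["HOWNEAR", "RANMAR_GET"]
```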
Coupling of a 2.5 kW steam reformer with a 1 kWel PEM fuel cell
NASA Astrophysics Data System (ADS)
Mathiak, J.; Heinzel, A.; Roes, J.; Kalk, Th.; Kraus, H.; Brandt, H.
The University of Duisburg-Essen has developed a compact multi-fuel steam reformer suitable for natural gas, propane and butane. This steam reformer was combined with a polymer electrolyte membrane fuel cell (PEM FC) and a system test of the process chain was performed. The fuel processor comprises a prereformer step, a primary reformer, water gas shift reactors, a steam generator, internal heat exchangers to achieve optimised heat integration, an external burner for heat supply, and a preferential oxidation step (PROX) for CO purification. The fuel processor is designed to deliver a thermal hydrogen power output from 500 W to 2.5 kW. The PEM fuel cell stack provides about 1 kW of electrical power. In this paper, experimental results of measurements of the single components, PEM fuel cell and fuel processor, as well as results of coupling both to form a process chain, are presented.
Geospace simulations using modern accelerator processor technology
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D. J.
2009-12-01
OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFlops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we will report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
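The multi-level decomposition described above (domain decomposition across processors, then rank-local 3D columns farmed out to SPEs, each walked slice by slice so a slab fits in the small local store) can be sketched schematically. All shapes and counts below are illustrative, not OpenGGCM's actual values:

```python
# Schematic sketch of the column/slice work partitioning described above;
# the real implementation is generated C code with explicit DMA transfers.
def columns_for_spes(nx_local, ny_local, n_spes):
    """Assign a rank's (x, y) columns round-robin to its SPEs."""
    cols = [(i, j) for i in range(nx_local) for j in range(ny_local)]
    return {spe: cols[spe::n_spes] for spe in range(n_spes)}

def z_slices(nz, slab):
    """Split a column into slabs small enough for an SPE's local store."""
    return [(k, min(k + slab, nz)) for k in range(0, nz, slab)]

# Illustrative numbers: a 4x4 column patch per rank, 8 SPEs, 32 z-levels
# processed in slabs of 4.
assignment = columns_for_spes(nx_local=4, ny_local=4, n_spes=8)
work = z_slices(nz=32, slab=4)
```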
Self-sustained operation of a kWe-class kerosene-reforming processor for solid oxide fuel cells
NASA Astrophysics Data System (ADS)
Yoon, Sangho; Bae, Joongmyeon; Kim, Sunyoung; Yoo, Young-Sung
In this paper, fuel-processing technologies are developed for application in residential power generation (RPG) in solid oxide fuel cells (SOFCs). Kerosene is selected as the fuel because of its high hydrogen density and because of the established infrastructure that already exists in South Korea. A kerosene fuel processor with two different reaction stages, autothermal reforming (ATR) and adsorptive desulfurization reactions, is developed for SOFC operations. ATR is suited to the reforming of liquid hydrocarbon fuels because oxygen-aided reactions can break the aromatics in the fuel and steam can suppress carbon deposition during the reforming reaction. ATR can also be implemented as a self-sustaining reactor due to the exothermicity of the reaction. The kWe self-sustained kerosene fuel processor, including the desulfurizer, operates for about 250 h in this study. This fuel processor does not require a heat exchanger between the ATR reactor and the desulfurizer, or electric equipment for heat supply and fuel or water vaporization, because a suitable temperature of the ATR reformate is reached for H2S adsorption on the ZnO catalyst beds in the desulfurizer. Although the CH4 concentration in the reformate gas of the fuel processor is higher due to the lower temperature of the ATR tail gas, SOFCs can directly use CH4 as a fuel with the addition of sufficient steam feeds (H2O/CH4 ≥ 1.5), in contrast to low-temperature fuel cells. The reforming efficiency of the fuel processor is about 60%, and the desulfurizer removes H2S to a level sufficient to allow for the operation of SOFCs.
The ATLAS Level-1 Calorimeter Trigger: PreProcessor implementation and performance
NASA Astrophysics Data System (ADS)
Åsman, B.; Achenbach, R.; Allbrooke, B. M. M.; Anders, G.; Andrei, V.; Büscher, V.; Bansil, H. S.; Barnett, B. M.; Bauss, B.; Bendtz, K.; Bohm, C.; Bracinik, J.; Brawn, I. P.; Brock, R.; Buttinger, W.; Caputo, R.; Caughron, S.; Cerrito, L.; Charlton, D. G.; Childers, J. T.; Curtis, C. J.; Daniells, A. C.; Davis, A. O.; Davygora, Y.; Dorn, M.; Eckweiler, S.; Edmunds, D.; Edwards, J. P.; Eisenhandler, E.; Ellis, K.; Ermoline, Y.; Föhlisch, F.; Faulkner, P. J. W.; Fedorko, W.; Fleckner, J.; French, S. T.; Gee, C. N. P.; Gillman, A. R.; Goeringer, C.; Hülsing, T.; Hadley, D. R.; Hanke, P.; Hauser, R.; Heim, S.; Hellman, S.; Hickling, R. S.; Hidvégi, A.; Hillier, S. J.; Hofmann, J. I.; Hristova, I.; Ji, W.; Johansen, M.; Keller, M.; Khomich, A.; Kluge, E.-E.; Koll, J.; Laier, H.; Landon, M. P. J.; Lang, V. S.; Laurens, P.; Lepold, F.; Lilley, J. N.; Linnemann, J. T.; Müller, F.; Müller, T.; Mahboubi, K.; Martin, T. A.; Mass, A.; Meier, K.; Meyer, C.; Middleton, R. P.; Moa, T.; Moritz, S.; Morris, J. D.; Mudd, R. D.; Narayan, R.; zur Nedden, M.; Neusiedl, A.; Newman, P. R.; Nikiforov, A.; Ohm, C. C.; Perera, V. J. O.; Pfeiffer, U.; Plucinski, P.; Poddar, S.; Prieur, D. P. F.; Qian, W.; Rieck, P.; Rizvi, E.; Sankey, D. P. C.; Schäfer, U.; Scharf, V.; Schmitt, K.; Schröder, C.; Schultz-Coulon, H.-C.; Schumacher, C.; Schwienhorst, R.; Silverstein, S. B.; Simioni, E.; Snidero, G.; Staley, R. J.; Stamen, R.; Stock, P.; Stockton, M. C.; Tan, C. L. A.; Tapprogge, S.; Thomas, J. P.; Thompson, P. D.; Thomson, M.; True, P.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Weber, P.; Wessels, M.; Wiglesworth, C.; Williams, S. L.
2012-12-01
The PreProcessor system of the ATLAS Level-1 Calorimeter Trigger (L1Calo) receives about 7200 analogue signals from the electromagnetic and hadronic components of the calorimetric detector system. Lateral division results in cells which are pre-summed to so-called Trigger Towers of size 0.1 × 0.1 in azimuth (φ) and pseudorapidity (η). The received calorimeter signals represent deposits of transverse energy. The system consists of 124 individual PreProcessor modules that digitise the input signals for each LHC collision, and provide energy and timing information to the digital processors of the L1Calo system, which identify physics objects forming much of the basis for the full ATLAS first level trigger decision. This paper describes the architecture of the PreProcessor, its hardware realisation, functionality, and performance.
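The pre-summing of calorimeter cells into 0.1 × 0.1 Trigger Towers described above is, schematically, a binning of cell transverse energies in (η, φ). A minimal sketch with invented cell positions and energies (the real PreProcessor operates on ~7200 analogue inputs in hardware):

```python
# Schematic sketch of pre-summing cells into 0.1 x 0.1 Trigger Towers;
# cell granularity and energies here are hypothetical.
from collections import defaultdict

TOWER = 0.1  # tower size in both eta and phi

def tower_index(eta, phi):
    """Map a cell position to its (eta, phi) tower bin."""
    return (int(eta // TOWER), int(phi // TOWER))

def presum(cells):
    """cells: iterable of (eta, phi, Et) -> {tower: summed transverse energy}."""
    towers = defaultdict(float)
    for eta, phi, et in cells:
        towers[tower_index(eta, phi)] += et
    return dict(towers)

# Three toy cells: the first two fall in the same tower and are summed.
towers = presum([(0.02, 0.03, 1.5), (0.07, 0.08, 2.0), (0.12, 0.03, 4.0)])
```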
Corporate Mortality Files and Late Industrial Necropolitics.
Little, Peter C
2017-10-05
This article critically examines the corporate production, archival politics, and socio-legal dimensions of corporate mortality files (CMFs), the largest corporate archive developed by IBM to systematically document industrial exposures and occupational health outcomes for electronics workers. I first provide a history of IBM's CMF project, which amounts to a comprehensive mortality record for IBM employees over the past 40 years. Next, I explore a recent case in Endicott, New York, the birthplace of IBM, where the U.S. National Institute for Occupational Safety and Health studied IBM's CMFs for workers at IBM's former Endicott plant. By tracking the production of the IBM CMF and the strategic avoidance of this source of big data as evidence in a recent legal settlement, alongside local critiques of the IBM CMF project, the article develops what I call "late industrial necropolitics." © 2017 by the American Anthropological Association.
Configuring a fuel cell based residential combined heat and power system
NASA Astrophysics Data System (ADS)
Ahmed, Shabbir; Papadias, Dionissios D.; Ahluwalia, Rajesh K.
2013-11-01
The design and performance of a fuel cell based residential combined heat and power (CHP) system operating on natural gas have been analyzed. The natural gas is first converted to a hydrogen-rich reformate in a steam reformer based fuel processor, and the hydrogen is then electrochemically oxidized in a low temperature polymer electrolyte fuel cell to generate electric power. The heat generated in the fuel cell and the available heat in the exhaust gas are recovered to meet residential needs for hot water and space heating. Two fuel processor configurations have been studied. One of the configurations was explored to quantify the effects of design and operating parameters, which include pressure, temperature, and steam-to-carbon ratio in the fuel processor, and fuel utilization in the fuel cell. The second configuration applied the lessons from the study of the first configuration to increase the CHP efficiency. Results from the two configurations allow a quantitative comparison of the design alternatives. The analyses showed that these systems can operate at electrical efficiencies of ∼46% and combined heat and power efficiencies of ∼90%.
A natural-gas fuel processor for a residential fuel cell system
NASA Astrophysics Data System (ADS)
Adachi, H.; Ahmed, S.; Lee, S. H. D.; Papadias, D.; Ahluwalia, R. K.; Bendert, J. C.; Kanner, S. A.; Yamazaki, Y.
A system model was used to develop an autothermal reforming fuel processor to meet the targets of 80% efficiency (higher heating value) and start-up energy consumption of less than 500 kJ when operated as part of a 1-kWe natural-gas fueled fuel cell system for cogeneration of heat and power. The key catalytic reactors of the fuel processor - namely the autothermal reformer, a two-stage water gas shift reactor and a preferential oxidation reactor - were configured and tested in a breadboard apparatus. Experimental results demonstrated a reformate containing ∼48% hydrogen (on a dry basis and with pure methane as fuel) and less than 5 ppm CO. The effects of steam-to-carbon ratio and part-load operation were explored.
1988-05-22
Ada Compiler Validation Summary Report: International Business Machines Corporation, IBM Development System for the Ada Language System, Version 1.1.0, IBM 4381 under MVS, Wright-Patterson AFB. Period of report: 22 May 1987 to 22 May 1988.
1986-04-29
Ada Compiler Validation Summary Report: International Business Machines Corporation, IBM Development System for the Ada Language for VM/CMS, Version 1.0, on the IBM 4381 (System/370) under VM/CMS release 3.6. The compiler was tested using command scripts provided by International Business Machines Corporation; these scripts were reviewed by the validation team. International Business Machines Corporation has made no deliberate extensions to the Ada language standard.
A digital retina-like low-level vision processor.
Mertoguno, S; Bourbakis, N G
2003-01-01
This correspondence presents the basic design and the simulation of a low-level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition, and region-graph generation. At each layer, the array processor is a 2D array of k × m identical, autonomous hexagonal cells that simultaneously execute certain low-level vision tasks. The paper provides the hardware design and transistor-level simulation of the processing elements (PEs) of the retina-like processor, together with its simulated functionality and illustrative examples.
Efficient Use of Distributed Systems for Scientific Applications
NASA Technical Reports Server (NTRS)
Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques
2000-01-01
Distributed computing has been regarded as the future of high performance computing. Nationwide high-speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments, and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency of up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes ranging from 11,451 elements (the Barth4 mesh) to 30,269 elements (the Barth5 mesh).
Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular, this application, illustrated in the document, entails an integration of finite element and fluid dynamic simulations to address the cooling of turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom. This results from the complexity of the various components of the airfoils, which requires fine-grain meshing for accuracy. Additional information is contained in the original.
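The simulated-annealing partitioning that PART performs can be illustrated with a small serial sketch. The cost model below (per-element compute cost scaled by processor speed, plus a penalty for cut edges) and all function names are illustrative assumptions, not PART's actual formulation:

```python
import math
import random

def partition_cost(assign, edges, proc_speed):
    """Max weighted compute load plus a penalty for edges cut between processors."""
    load = [0.0] * len(proc_speed)
    for node in range(len(assign)):
        # Slower processor -> higher cost per element (heterogeneity assumption).
        load[assign[node]] += 1.0 / proc_speed[assign[node]]
    cut = sum(1 for u, v in edges if assign[u] != assign[v])
    return max(load) + 0.1 * cut

def anneal_partition(n_nodes, edges, proc_speed, steps=20000, seed=0):
    """Simulated annealing: move one element at a time, always accept improvements,
    accept degradations with probability exp(-delta/T) at temperature T."""
    rng = random.Random(seed)
    assign = [rng.randrange(len(proc_speed)) for _ in range(n_nodes)]
    cost = partition_cost(assign, edges, proc_speed)
    temp = 1.0
    for _ in range(steps):
        node = rng.randrange(n_nodes)
        old = assign[node]
        assign[node] = rng.randrange(len(proc_speed))
        new_cost = partition_cost(assign, edges, proc_speed)
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost          # keep the move
        else:
            assign[node] = old       # revert the move
        temp *= 0.9997               # geometric cooling schedule
    return assign, cost
```

For a small ring mesh partitioned over one fast and one slow processor, the annealer tends toward a partition that assigns the slow processor fewer elements while keeping the number of cut edges low.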
An Enhanced GINGER Simulation Code with Harmonic Emission and HDF5 IO Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fawley, William M.
GINGER [1] is an axisymmetric, polychromatic (r-z-t) FEL simulation code originally developed in the mid-1980s to model the performance of single-pass amplifiers. Over the past 15 years GINGER's capabilities have been extended to include more complicated configurations such as undulators with drift spaces, dispersive sections, and vacuum chamber wakefield effects; multi-pass oscillators; and multi-stage harmonic cascades. Its coding base has been tuned to permit running effectively on platforms ranging from desktop PCs to massively parallel processors such as the IBM SP. Recently, we have made significant changes to GINGER by replacing the original predictor-corrector field solver with a new direct implicit algorithm, adding harmonic emission capability, and switching to the HDF5 IO library [2] for output diagnostics. In this paper, we discuss some details regarding these changes and also present simulation results for LCLS SASE emission at λ = 0.15 nm and higher harmonics.
Broccolini, Aldobrando; Gidaro, Teresa; Morosetti, Roberta; Gliubizzi, Carla; Servidei, Tiziana; Pescatori, Mario; Tonali, Pietro A; Ricci, Enzo; Mirabella, Massimiliano
2006-02-01
Neprilysin (NEP, EP24.11), a metallopeptidase originally shown to modulate signalling events by degrading small regulatory peptides, is also an amyloid-beta- (Abeta) degrading enzyme. We investigated a possible role of NEP in inclusion body myositis (IBM) and other acquired and hereditary muscle disorders and found that in all myopathies NEP expression was directly associated with the degree of muscle fibre regeneration. In IBM muscle, NEP protein was also strongly accumulated in Abeta-bearing abnormal fibres. In vitro, during the experimental differentiation of myoblasts, NEP protein expression was regulated at the post-transcriptional level with a rapid increase in the early stage of myoblast differentiation followed by a gradual reduction thereafter, coincident with the progression of the myogenic programme. Treatment of differentiating muscle cells with the NEP inhibitor dl-3-mercapto-2-benzylpropanoylglycine resulted in impaired differentiation that was mainly associated with an abnormal regulation of Akt activation. Therefore, NEP may play an important role during muscle cell differentiation, possibly through the regulation, either directly or indirectly, of the insulin-like growth factor I-driven myogenic programme. In IBM muscle increased NEP may be instrumental in (i) reducing the Abeta accumulation in vulnerable fibres and (ii) promoting a repair/regenerative attempt of muscle fibres possibly through the modulation of insulin-like growth factor I-dependent pathways.
A mechanistic Individual-based Model of microbial communities.
Jayathilake, Pahala Gedara; Gupta, Prashant; Li, Bowen; Madsen, Curtis; Oyebamiji, Oluwole; González-Cabaleiro, Rebeca; Rushton, Steve; Bridgens, Ben; Swailes, David; Allen, Ben; McGough, A Stephen; Zuliani, Paolo; Ofiteru, Irina Dana; Wilkinson, Darren; Chen, Jinju; Curtis, Tom
2017-01-01
Accurate predictive modelling of the growth of microbial communities requires the credible representation of the interactions of biological, chemical and mechanical processes. However, although biological and chemical processes are represented in a number of Individual-based Models (IbMs) the interaction of growth and mechanics is limited. Conversely, there are mechanically sophisticated IbMs with only elementary biology and chemistry. This study focuses on addressing these limitations by developing a flexible IbM that can robustly combine the biological, chemical and physical processes that dictate the emergent properties of a wide range of bacterial communities. This IbM is developed by creating a microbiological adaptation of the open source Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). This innovation should provide the basis for "bottom up" prediction of the emergent behaviour of entire microbial systems. In the model presented here, bacterial growth, division, decay, mechanical contact among bacterial cells, and adhesion between the bacteria and extracellular polymeric substances are incorporated. In addition, fluid-bacteria interaction is implemented to simulate biofilm deformation and erosion. The model predicts that the surface morphology of biofilms becomes smoother with increased nutrient concentration, which agrees well with previous literature. In addition, the results show that increased shear rate results in smoother and more compact biofilms. The model can also predict shear rate dependent biofilm deformation, erosion, streamer formation and breakup.
Increasing the electric efficiency of a fuel cell system by recirculating the anodic offgas
NASA Astrophysics Data System (ADS)
Heinzel, A.; Roes, J.; Brandt, H.
The University of Duisburg-Essen and the Center for Fuel Cell Technology (ZBT Duisburg GmbH) have developed a compact multi-fuel steam reformer suitable for natural gas, propane and butane. Fuel processor prototypes based on this concept were built in the power range from 2.5 to 12.5 kW of thermal hydrogen power for different applications and different industrial partners. The fuel processor concept contains all the necessary elements - a prereformer step, a primary reformer, water gas shift reactors, a steam generator, and internal heat exchangers - to achieve optimised heat integration, along with an external burner for heat supply and a preferential oxidation step (PrOx) for CO purification. One of these fuel processors is designed to deliver a thermal hydrogen power output of 2.5 kW, matched to a PEM fuel cell stack providing about 1 kW of electrical power, and achieves a thermal efficiency of about 75% (LHV basis, after PrOx), while the CO content of the product gas is below 20 ppm. This steam reformer has been combined with a 1 kW PEM fuel cell. Recirculating the anodic offgas results in a significant efficiency increase for the fuel processor. The gross efficiency of the combined system was already clearly above 30% during the first tests. Further improvements are currently being investigated and developed at ZBT.
IBM Application System/400 as the foundation of the Mayo Clinic/IBM PACS project
NASA Astrophysics Data System (ADS)
Rothman, Melvyn L.; Morin, Richard L.; Persons, Kenneth R.; Gibbons, Patricia S.
1990-08-01
An IBM Application System/400 (AS/400) anchors the Mayo Clinic/IBM joint development PACS project. This paper highlights some of the AS/400's features and the resulting benefits which make it a strong foundation for a medical image archival and review system. Among the AS/400's key features are: (1) a high-level machine architecture; (2) object orientation; (3) a relational database and other functions integrated into the system's architecture; and (4) high-function interfaces to IBM Personal Computers and IBM Personal System/2 (PS/2™) machines.
1988-03-28
Ada Compiler Validation Summary Report: International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0, IBM 4381 under VM/HPO, host and target, Wright-Patterson AFB. International Business Machines Corporation declares that it is the owner of record of the object code of the compiler listed in this declaration.
1989-04-20
Ada Compiler Validation Summary Report, Certificate Number 890420W1.10073: International Business Machines Corporation, IBM Development System for the Ada Language, VM/CMS Ada Compiler, Version 2.1.1, IBM 3083, Wright-Patterson AFB. The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation team, using all default option settings except for...
1989-04-20
Ada Compiler Validation Summary Report, Certificate Number 890420W1.10075: International Business Machines Corporation, IBM Development System for the Ada Language, CMS/MVS Ada Cross Compiler, Version 2.1.1, Wright-Patterson AFB, IBM... The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation team, using all default option settings.
Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.
Hines, Michael; Kumar, Sameer; Schürmann, Felix
2011-01-01
For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method, either implemented via non-blocking MPI_Isend or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods had similar performance, with very low overhead for the initiation of spike communication: the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast), and a two-phase Multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible, since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in incoming processor spikes between synchronization intervals.
Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect neural net architecture but be randomly distributed so that sets of cells which are burst firing together should be on different processors with their targets on as large a set of processors as possible.
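The load-balance argument above can be illustrated with a toy calculation: if an assembly of cells that burst-fire together is placed contiguously on one processor, that processor receives the entire interval's spike load, whereas a scattered placement (round-robin here, standing in for random assignment) spreads it evenly. The network sizes and placement schemes below are illustrative assumptions, not the paper's benchmark configuration:

```python
def max_spike_load(cell_to_proc, firing_cells, n_procs):
    """Largest number of firing cells owned by any single processor in one interval."""
    counts = [0] * n_procs
    for c in firing_cells:
        counts[cell_to_proc[c]] += 1
    return max(counts)

n_cells, n_procs = 1024, 8
# Suppose one burst-firing assembly of 128 contiguous cells fires in this interval.
firing = list(range(128))

# Topology-preserving placement: contiguous blocks of cells per processor.
block = [c * n_procs // n_cells for c in range(n_cells)]

# Scattered (round-robin) placement spreads each assembly across all processors.
scattered = [c % n_procs for c in range(n_cells)]

print(max_spike_load(block, firing, n_procs))      # 128: the whole burst lands on one processor
print(max_spike_load(scattered, firing, n_procs))  # 16: the burst is spread evenly
```

Since computation between synchronizations is dominated by the busiest processor, the scattered placement's 8x lower peak load translates directly into better scaling in this toy model.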
Yang, L. H.; Brooks III, E. D.; Belak, J.
1992-01-01
A molecular dynamics algorithm for performing large-scale simulations using the Parallel C Preprocessor (PCP) programming paradigm on the BBN TC2000, a massively parallel computer, is discussed. The algorithm uses a linked-cell data structure to obtain the near neighbors of each atom as time evolves. Each processor is assigned to a geometric domain containing many subcells, and the storage for that domain is private to the processor. Within this scheme, the interdomain (i.e., interprocessor) communication is minimized.
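A linked-cell neighbor search of the kind described above can be sketched as follows: binning atoms into cubic subcells with edge length at least the interaction cutoff reduces each neighbor query to the 27 surrounding cells. This is an illustrative serial sketch (the function names and flat-dictionary cell list are assumptions), not the PCP/TC2000 implementation:

```python
from collections import defaultdict

def build_cell_list(positions, cell_size):
    """Bin atoms into cubic subcells keyed by integer cell coordinates."""
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        cells[(int(x // cell_size), int(y // cell_size), int(z // cell_size))].append(i)
    return cells

def near_neighbors(i, positions, cells, cell_size, cutoff):
    """Atoms within `cutoff` of atom i, scanning only the 27 surrounding subcells
    (valid when cell_size >= cutoff)."""
    x, y, z = positions[i]
    cx, cy, cz = int(x // cell_size), int(y // cell_size), int(z // cell_size)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                    if j == i:
                        continue
                    px, py, pz = positions[j]
                    if (px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2 <= cutoff ** 2:
                        found.append(j)
    return found
```

In the domain-decomposed version described in the record, each processor would own the subcells of its geometric domain privately and exchange only boundary-cell contents with neighboring domains.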
1993-01-01
Acronym list (fragment): deoxyribose nucleic acid; DPP: Digital Post-Processor; DREO: Defence Research Establishment Ottawa; RF: Radio Frequency; TeO2: tellurium dioxide; TIC: Time... The acoustic velocity in TeO2 is 620 m/s, so a device with a 100-µs aperture is 62 mm long. To take advantage of the full interaction time of these Bragg cells, the whole... Bragg cells included in the digital post-processor hardware: glass (bulk interaction), bandwidth 100 MHz, center frequency 150 MHz; LiNbO3...
NASA Astrophysics Data System (ADS)
Wade, Mark T.; Shainline, Jeffrey M.; Orcutt, Jason S.; Ram, Rajeev J.; Stojanovic, Vladimir; Popovic, Milos A.
2014-03-01
We present the spoked-ring microcavity, a nanophotonic building block enabling energy-efficient, active photonics in unmodified, advanced CMOS microelectronics processes. The cavity is realized in the IBM 45nm SOI CMOS process - the same process used to make many commercially available microprocessors including the IBM Power7 and Sony Playstation 3 processors. In advanced SOI CMOS processes, no partial etch steps and no vertical junctions are available, which limits the types of optical cavities that can be used for active nanophotonics. To enable efficient active devices with no process modifications, we designed a novel spoked-ring microcavity which is fully compatible with the constraints of the process. As a modulator, the device leverages the sub-100nm lithography resolution of the process to create radially extending p-n junctions, providing high optical fill factor depletion-mode modulation and thereby eliminating the need for a vertical junction. The device is made entirely in the transistor active layer, low-loss crystalline silicon, which eliminates the need for a partial etch commonly used to create ridge cavities. In this work, we present the full optical and electrical design of the cavity including rigorous mode solver and FDTD simulations to design the Q-limiting electrical contacts and the coupling/excitation. We address the layout of active photonics within the mask set of a standard advanced CMOS process and show that high-performance photonic devices can be seamlessly monolithically integrated alongside electronics on the same chip. The present designs enable monolithically integrated optoelectronic transceivers on a single advanced CMOS chip, without requiring any process changes, enabling the penetration of photonics into the microprocessor.
Optimizing the inner loop of the gravitational force interaction on modern processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Michael S
2010-12-08
We have achieved superior performance on multiple generations of the fastest supercomputers in the world with our hashed oct-tree N-body code (HOT), spanning almost two decades and garnering multiple Gordon Bell Prizes for significant achievement in parallel processing. Execution time for our N-body code is largely influenced by the force calculation in the inner loop. Improvements to the inner loop using SSE3 instructions have enabled the calculation of over 200 million gravitational interactions per second per processor on a 2.6 GHz Opteron, for a computational rate of over 7 Gflops in single precision (70% of peak). We obtain optimal performance on some processors (including the Cell) by decomposing the reciprocal square root function required for a gravitational interaction into a table lookup, Chebyshev polynomial interpolation, and Newton-Raphson iteration, using the algorithm of Karp. By unrolling the loop by a factor of six and using SPU intrinsics to compute on vectors, we obtain performance of over 16 Gflops on a single Cell SPE. Aggregated over the 8 SPEs on a Cell processor, the overall performance is roughly 130 Gflops. In comparison, the ordinary C version of our inner loop obtains only 1.6 Gflops per SPE with the spuxlc compiler.
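The reciprocal-square-root decomposition described above can be sketched in scalar form. The seed table below (interval midpoints over [1, 4), with the polynomial step folded into extra Newton-Raphson iterations rather than an explicit Chebyshev interpolation) is an illustrative assumption, not the authors' Cell SPE code:

```python
def rsqrt(x, table_bits=4, iterations=3):
    """Approximate 1/sqrt(x) for x in [1, 4): coarse seed from a lookup table,
    refined by Newton-Raphson iterations y <- y * (1.5 - 0.5 * x * y * y)."""
    n = 1 << table_bits
    # Hypothetical seed table: exact value at the midpoint of each of n intervals.
    idx = min(int((x - 1.0) * n / 3.0), n - 1)
    mid = 1.0 + (idx + 0.5) * 3.0 / n
    y = mid ** -0.5              # in hardware this would be a precomputed table entry
    for _ in range(iterations):  # each iteration roughly squares the relative error
        y = y * (1.5 - 0.5 * x * y * y)
    return y
```

The appeal on a vector unit is that the refinement uses only multiplies and subtractions, so many interactions can be computed in parallel without a divide or square-root instruction.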
1986-05-05
AVF-VSR-36.0187: Ada Compiler Validation Summary Report: International Business Machines Corporation, IBM Development System for the Ada Language for... Tests withdrawn from ACVC Version 1.7 were not run. The compiler was tested using command scripts provided by International Business Machines Corporation; these scripts were reviewed by the validation team. Appendix A, Compliance Statement: International Business Machines Corporation has submitted a compliance statement concerning the IBM...
NASA Astrophysics Data System (ADS)
Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos
2011-01-01
General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
Distributed multitasking ITS with PVM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, W.C.; Halbleib, J.A. Sr.
1995-12-31
Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of the MCNP code on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron-photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.
Distributed multitasking ITS with PVM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, W.C.; Halbleib, J.A. Sr.
1995-02-01
Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources, and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of MCNP on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.
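The embarrassingly parallel character of Monte Carlo described in both ITS records can be sketched with a toy estimator: each "worker" generates its histories independently from its own seed, and only the final tallies are combined, mirroring the minimal interprocessor communication noted above. The π-estimation example and per-worker seeding scheme are illustrative assumptions, unrelated to ITS or MCNP physics:

```python
import random

def mc_histories(n, seed):
    """One worker: count hits inside the unit quarter circle over n histories."""
    rng = random.Random(seed)            # independent random stream per worker
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def parallel_pi(n_total, n_workers):
    """Workers run independently and communicate only to combine final tallies;
    the statistical uncertainty shrinks like 1/sqrt(n_total)."""
    per = n_total // n_workers
    total_hits = sum(mc_histories(per, seed=w) for w in range(n_workers))
    return 4.0 * total_hits / (per * n_workers)
```

Because each call to `mc_histories` is independent, the loop over workers could be distributed across a cluster (as ITS does with PVM) with wall-clock time nearly inversely proportional to the number of processors.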
NASA Technical Reports Server (NTRS)
Pyle, R. S.; Sykora, R. G.; Denman, S. C.
1976-01-01
FLEXSTAB, an array of computer programs developed on CDC equipment, has been converted to operate on the IBM 360 computation system. Instructions for installing, validating, and operating FLEXSTAB on the IBM 360 are included. Hardware requirements are itemized, and supplemental materials describe JCL sequences, the CDC-to-IBM conversion, the input/output subprograms, and the interprogram data flow.
Evaluation and construction of diagnostic criteria for inclusion body myositis
Mammen, Andrew L.; Amato, Anthony A.; Weiss, Michael D.; Needham, Merrilee
2014-01-01
Objective: To use patient data to evaluate and construct diagnostic criteria for inclusion body myositis (IBM), a progressive disease of skeletal muscle. Methods: The literature was reviewed to identify all previously proposed IBM diagnostic criteria. These criteria were applied through medical records review to 200 patients diagnosed as having IBM and 171 patients diagnosed as having a muscle disease other than IBM by neuromuscular specialists at 2 institutions, and to a validating set of 66 additional patients with IBM from 2 other institutions. Machine learning techniques were used for unbiased construction of diagnostic criteria. Results: Twenty-four previously proposed IBM diagnostic categories were identified. Twelve categories all performed with high (≥97%) specificity but varied substantially in their sensitivities (11%–84%). The best performing category was European Neuromuscular Centre 2013 probable (sensitivity of 84%). Specialized pathologic features and newly introduced strength criteria (comparative knee extension/hip flexion strength) performed poorly. Unbiased data-directed analysis of 20 features in 371 patients resulted in construction of higher-performing data-derived diagnostic criteria (90% sensitivity and 96% specificity). Conclusions: Published expert consensus–derived IBM diagnostic categories have uniformly high specificity but wide-ranging sensitivities. High-performing IBM diagnostic category criteria can be developed directly from principled unbiased analysis of patient data. Classification of evidence: This study provides Class II evidence that published expert consensus–derived IBM diagnostic categories accurately distinguish IBM from other muscle disease with high specificity but wide-ranging sensitivities. PMID:24975859
Sleep disordered breathing in a cohort of patients with sporadic inclusion body myositis.
Della Marca, Giacomo; Sancricca, Cristina; Losurdo, Anna; Di Blasi, Chiara; De Fino, Chiara; Morosetti, Roberta; Broccolini, Aldobrando; Testani, Elisa; Scarano, Emanuele; Servidei, Serenella; Mirabella, Massimiliano
2013-08-01
The aims of the study were: (1) to evaluate subjective sleep quality and daytime sleepiness in patients affected by sporadic inclusion-body myositis (IBM); (2) to define the sleep and sleep-related respiratory pattern in IBM patients. Thirteen consecutive adult patients affected by definite IBM were enrolled, six women and seven men, mean age 66.2 ± 11.1 years (range: 50-80). Diagnosis was based on clinical and muscle biopsy studies. All patients underwent subjective sleep evaluation (Pittsburgh Sleep Quality Index, PSQI, and Epworth Sleepiness Scale, ESS), oro-pharyngo-esophageal scintigraphy, pulmonary function tests, psychometric measures, anatomic evaluation of the upper airways, and laboratory-based polysomnography. Findings in IBM patients were compared to those obtained from a control group of 25 healthy subjects (13 men and 12 women, mean age 61.9 ± 8.6 years). Disease duration was >10 years in all. Mean IBM severity score was 28.8 ± 5.4 (range 18-36). Dysphagia was present in 10 patients. Nine patients had PSQI scores ≥ 5; patients had a higher mean PSQI score (IBM: 7.2 ± 4.7, controls: 2.76 ± 1.45, p=0.005); one patient (and no controls) had ESS > 9. Polysomnography showed that IBM patients, compared to controls, had lower sleep efficiency (IBM: 78.8 ± 12.0%, controls: 94.0 ± 4.5%, p<0.001), more awakenings (IBM: 11.9 ± 11.0, controls: 5.2 ± 7.5, p=0.009) and increased nocturnal time awake (IBM: 121.2 ± 82.0 min, controls: 46.12 ± 28.8 min, p=0.001). Seven patients (and no controls) had polysomnographic findings consistent with sleep-disordered breathing (SDB). The data suggest that sleep disruption, and in particular SDB, may be highly prevalent in IBM, and indicate that IBM patients have poor sleep and a high prevalence of SDB. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Son, In-Hyuk; Shin, Woo-Cheol; Lee, Yong-Kul; Lee, Sung-Chul; Ahn, Jin-Gu; Han, Sang-Il; kweon, Ho-Jin; Kim, Ju-Yong; Kim, Moon-Chan; Park, Jun-Yong
A polymer electrolyte membrane fuel cell (PEMFC) system is developed to power a notebook computer. The system consists of a compact methanol-reforming system with a CO preferential oxidation unit, a 16-cell PEMFC stack, and a control unit for the management of the system with a d.c.-d.c. converter. The compact fuel-processor system (260 cm³) generates about 1.2 L min⁻¹ of reformate, which corresponds to 35 We, with a low CO concentration (<30 ppm, typically 0 ppm), and is thus proven to be capable of being targeted at notebook computers.
Analysis of habitat-selection rules using an individual-based model
Steven F. Railsback; Bret C. Harvey
2002-01-01
Abstract - Despite their promise for simulating natural complexity, individual-based models (IBMs) are rarely used for ecological research or resource management. Few IBMs have been shown to reproduce realistic patterns of behavior by individual organisms. To test our IBM of stream salmonids and draw conclusions about foraging theory, we analyzed the IBM's ability to...
Update in inclusion body myositis
Machado, Pedro; Brady, Stefen; Hanna, Michael G.
2013-01-01
Purpose of review The purpose of this study is to review recent scientific advances relating to the natural history, cause, treatment and serum and imaging biomarkers of inclusion body myositis (IBM). Recent findings Several theories regarding the aetiopathogenesis of IBM are being explored and new therapeutic approaches are being investigated. New diagnostic criteria have been proposed, reflecting the knowledge that the diagnostic pathological findings may be absent in patients with clinically typical IBM. The role of MRI in IBM is expanding and knowledge about pathological biomarkers is increasing. The recent description of autoantibodies to cytosolic 5′ nucleotidase 1A in patients with IBM is a potentially important advance that may aid early diagnosis and provides new evidence regarding the role of autoimmunity in IBM. Summary IBM remains an enigmatic and often misdiagnosed disease. The pathogenesis of the disease is still not fully understood. To date, pharmacological treatment trials have failed to show clear efficacy. Future research should continue to focus on improving understanding of the pathophysiological mechanisms of the disease and on the identification of reliable and sensitive outcome measures for clinical trials. IBM is a rare disease and international multicentre collaboration for trials is important to translate research advances into improved patient outcomes. PMID:24067381
Method for fast start of a fuel processor
Ahluwalia, Rajesh K [Burr Ridge, IL; Ahmed, Shabbir [Naperville, IL; Lee, Sheldon H. D. [Willowbrook, IL
2008-01-29
An improved fuel processor for fuel cells is provided whereby the startup time of the processor is under sixty seconds and can reach 30 seconds or less. A rapid startup time is achieved by either igniting, or allowing to react, a small mixture of air and fuel over the catalyst of an autothermal reformer (ATR), warming it up. The ATR then produces combustible gases that are subsequently oxidized on, and simultaneously warm up, the water-gas shift zone catalysts. After normal operating temperature has been achieved, the proportion of air included with the fuel is greatly diminished.
Electrically reconfigurable logic array
NASA Technical Reports Server (NTRS)
Agarwal, R. K.
1982-01-01
One approach to composing complicated systems from algorithmically specialized logic circuits or processors is to perform relational computations such as union, division, and intersection directly in hardware. These relational operations can be pipelined efficiently on a network of processors having an array configuration, and such processors can be designed and implemented with a few simple cells. In order to determine the state of the art in Electrically Reconfigurable Logic Arrays (ERLA), a survey of the available programmable logic arrays (PLA) and the logic circuit elements used in such arrays was conducted. Based on this survey, some recommendations are made for ERLA devices.
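As an illustration of the relational computations named above, here is a minimal set-based software sketch; the relation layouts are invented for the example, and the surveyed hardware would evaluate these comparisons in a pipelined cell array rather than a scan:

```python
# Relations modeled as Python sets of tuples/values (illustrative only;
# an ERLA-style array would pipeline these comparisons across cells).
def union(r, s):
    return r | s

def intersection(r, s):
    return r & s

def division(r, s):
    # r is a set of (x, y) pairs, s a set of y values.
    # Result: every x that is paired in r with EVERY y in s.
    xs = {x for x, _ in r}
    return {x for x in xs if all((x, y) in r for y in s)}
```

Division is the least familiar of the three: with r = {(1, 'a'), (1, 'b'), (2, 'a')} and s = {'a', 'b'}, only x = 1 covers all of s.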
NASA Astrophysics Data System (ADS)
Chen, Ming-Chih; Hsiao, Shen-Fu
In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-products expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method achieves significant area improvement compared with previous designs.
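The flavor of the area saving can be sketched with a toy greedy extraction of one shared XOR pair; the output expressions below are invented, and the paper's actual algorithm is more elaborate than this single-pass sketch:

```python
from collections import Counter
from itertools import combinations

# Each output bit is the XOR of some input bits (a bit-level sum of
# products already flattened to XOR lists). Expressions are invented.
outputs = {
    "y0": ["x0", "x1", "x2"],
    "y1": ["x0", "x1", "x3"],
    "y2": ["x1", "x2", "x3"],
}

def naive_xor_count(outs):
    # an n-input XOR costs n - 1 two-input gates
    return sum(len(bits) - 1 for bits in outs.values())

def extract_common_pair(outs):
    # Greedy step: find the most frequent XOR pair and share it.
    pairs = Counter()
    for bits in outs.values():
        for pair in combinations(sorted(bits), 2):
            pairs[pair] += 1
    (a, b), freq = pairs.most_common(1)[0]
    if freq < 2:
        return outs, None            # nothing worth sharing
    shared = f"({a}^{b})"
    reduced = {}
    for name, bits in outs.items():
        if a in bits and b in bits:
            bits = [x for x in bits if x not in (a, b)] + [shared]
        reduced[name] = bits
    return reduced, shared
```

Here the naive cost is 6 two-input XOR gates; after factoring one shared pair (1 gate), the outputs need only 4 more, for a 5-gate total.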
Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks
Kim, Deokho; Park, Karam; Ro, Won W.
2011-01-01
While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores would be used as processing nodes of wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors, the Cell Broadband Engine. PMID:22164053
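As a generic illustration of the decoding cost involved (not the authors' Cell-optimized algorithm), random linear network coding over GF(2) with Gaussian-elimination decoding can be sketched as:

```python
import random

def encode(packets, n_coded, rng):
    # Each coded packet is a random XOR combination of the originals.
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    # Incremental Gaussian elimination over GF(2).
    pivots = {}                      # pivot column -> (row, payload)
    for coeffs, payload in coded:
        coeffs = coeffs[:]
        changed = True
        while changed:               # reduce against known pivot rows
            changed = False
            for col, (pc, pp) in pivots.items():
                if coeffs[col]:
                    coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                    payload ^= pp
                    changed = True
        lead = next((i for i, c in enumerate(coeffs) if c), None)
        if lead is not None:
            pivots[lead] = (coeffs, payload)
    if len(pivots) < k:
        return None                  # rank too low: keep listening
    result = [None] * k
    for col in sorted(pivots, reverse=True):   # back-substitution
        coeffs, payload = pivots[col]
        for j in range(col + 1, k):
            if coeffs[j]:
                payload ^= result[j]
        result[col] = payload
    return result
```

With three independent combinations of three packets, e.g. coefficients [1,0,0], [1,1,0], [1,1,1] carrying payloads 5, 6, 2, decoding recovers the original packets [5, 3, 4].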
Acker, Jason P; Hansen, Adele L; Yi, Qi-Long; Sondi, Nayana; Cserti-Gazdewich, Christine; Pendergrast, Jacob; Hannach, Barbara
2016-01-01
After introduction of a closed-system cell processor, the effect of this product change on safety, efficacy, and utilization of washed red blood cells (RBCs) was assessed. This study was a pre-/postimplementation observational study. Efficacy data were collected from sequentially transfused washed RBCs received as prophylactic therapy by β-thalassemia patients during a 3-month period before and after implementation of the Haemonetics ACP 215 closed-system processor. Before implementation, an open system (TerumoBCT COBE 2991) was used to wash RBCs. The primary endpoint for efficacy was a change in hemoglobin (Hb) concentration corrected for the duration between transfusions. The primary endpoint for safety was the frequency of adverse transfusion reactions (ATRs) in all washed RBCs provided by Canadian Blood Services to the transfusion service for 12 months before and after implementation. Data were analyzed from more than 300 RBCs transfused to 31 recipients before implementation and 29 recipients after implementation. The number of units transfused per episode reduced significantly after implementation, from a mean of 3.5 units to a mean of 3.1 units (p < 0.005). The corrected change in Hb concentration was not significantly different before and after implementation. ATRs occurred in 0.15% of transfusions both before and after implementation. Safety and efficacy of washed RBCs were not affected after introduction of a closed-system cell processor. The ACP 215 allowed for an extended expiry time, improving inventory management and overall utilization of washed RBCs. Transfusion of fewer RBCs per episode reduced exposure of recipients to allogeneic blood products while maintaining efficacy. © 2015 AABB.
FPGA wavelet processor design using language for instruction-set architectures (LISA)
NASA Astrophysics Data System (ADS)
Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios
2007-04-01
The design of a microprocessor is a long, tedious, and error-prone task consisting of several design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC), and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from the instruction set but also from the architecture description, including pipelining behavior, which gives design- and development-tool consistency across all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like a constant instruction word size, deep pipelines, and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing operations were the key to achieving higher throughput. A further improvement is possible with today's FPGA technology: today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the essential improvement that a TVP has compared with traditional RISC or PDSP designs.
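The multiply-accumulate pattern that dominates the DWT inner loop can be sketched as below; the taps are a toy smoothing filter, not the fingerprint scheme's actual wavelet coefficients. A PDSP retires one tap per MAC cycle, while the vector processor described collapses the whole inner loop into a single multiply cycle plus an accumulate:

```python
def mac_filter(samples, taps):
    # FIR-style filtering: one multiply-accumulate per tap per output.
    out = []
    for i in range(len(samples) - len(taps) + 1):
        acc = 0
        for j, t in enumerate(taps):   # the MAC inner loop
            acc += t * samples[i + j]
        out.append(acc)
    return out
```

For example, mac_filter([1, 2, 3, 4, 5], [1, 3, 3, 1]) performs eight MACs to produce its two outputs.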
Efficiency of a solid polymer fuel cell operating on ethanol
NASA Astrophysics Data System (ADS)
Ioannides, Theophilos; Neophytides, Stylianos
The efficiency of a solid polymer fuel cell (SPFC) system operating on ethanol fuel has been analyzed as a function of operating parameters focusing on vehicle and stationary applications. Two types of ethanol processors — employing either steam reforming or partial oxidation (POX) steps — have been considered and their performance has been investigated by thermodynamic analysis. SPFC operation has been analyzed by an available parametric model. It has been found that dilute ethanol-water mixtures (˜55% v/v EtOH) are the most suitable for stationary applications with a steam reformer (SR)-SPFC system. Regarding vehicle applications, pure ethanol (˜95% v/v EtOH) appears to be the best fuel with a POX-SPFC system. Efficiencies in the case of an ideal ethanol processor can be of the order of 60% under low load conditions and 30-35% at peak power, while efficiencies with an actual processor are 80-85% of the above values.
INM Integrated Noise Model Version 2. Programmer’s Guide
1979-09-01
cost, turnaround time, and system-dependent limitations. 3.2 CONVERSION PROBLEMS. Item No. / Description / Category: 1, BLOCK DATA Initialization, IBM Restricted; 2, Boolean Operations, Differences; 3, Call Statement Parameters, Extensions; 4, Data Initialization, IBM Restricted; 5, ENTRY, Differences; 6, EQUIVALENCE, Machine Dependent; 7, Format: A, CDC Extension; 8, Hollerith Strings, IBM Restricted; 9, Hollerith Variables, IBM Restricted; 10, Identifier Names, CDC Extension.
A light hydrocarbon fuel processor producing high-purity hydrogen
NASA Astrophysics Data System (ADS)
Löffler, Daniel G.; Taylor, Kyle; Mason, Dylan
This paper discusses the design process and presents performance data for a dual fuel (natural gas and LPG) fuel processor for PEM fuel cells delivering between 2 and 8 kW electric power in stationary applications. The fuel processor resulted from a series of design compromises made to address different design constraints. First, the product quality was selected; then, the unit operations needed to achieve that product quality were chosen from the pool of available technologies. Next, the specific equipment needed for each unit operation was selected. Finally, the unit operations were thermally integrated to achieve high thermal efficiency. Early in the design process, it was decided that the fuel processor would deliver high-purity hydrogen. Hydrogen can be separated from other gases by pressure-driven processes based on either selective adsorption or permeation. The pressure requirement made steam reforming (SR) the preferred reforming technology because it does not require compression of combustion air; therefore, steam reforming is more efficient in a high-pressure fuel processor than alternative technologies like autothermal reforming (ATR) or partial oxidation (POX), where the combustion occurs at the pressure of the process stream. A low-temperature pre-reformer reactor is needed upstream of a steam reformer to suppress coke formation; yet, low temperatures facilitate the formation of metal sulfides that deactivate the catalyst. For this reason, a desulfurization unit is needed upstream of the pre-reformer. Hydrogen separation was implemented using a palladium alloy membrane. Packed beds were chosen for the pre-reformer and reformer reactors primarily because of their low cost, relatively simple operation and low maintenance. Commercial, off-the-shelf balance of plant (BOP) components (pumps, valves, and heat exchangers) were used to integrate the unit operations. The fuel processor delivers up to 100 slm hydrogen >99.9% pure with <1 ppm CO, <3 ppm CO2.
The thermal efficiency is better than 67% operating at full load. This fuel processor has been integrated with a 5-kW fuel cell producing electricity and hot water.
Spiking neural networks on high performance computer clusters
NASA Astrophysics Data System (ADS)
Chen, Chong; Taha, Tarek M.
2011-09-01
In this paper we examine the acceleration of two spiking neural network models on three clusters of multicore processors representing three categories of processors: x86, STI Cell, and NVIDIA GPGPUs. The x86 cluster utilized consists of 352 dual-core AMD Opterons, the Cell cluster consists of 320 Sony PlayStation 3s, while the GPGPU cluster contains 32 NVIDIA Tesla S1070 systems. The results indicate that the GPGPU platform can dominate in performance compared to the Cell and x86 platforms examined. From a cost perspective, however, the GPGPU is more expensive in terms of neuron/s throughput. If the cost of GPGPUs goes down in the future, this platform will become very cost-effective for these models.
NASA Technical Reports Server (NTRS)
Moore, J. Strother
1992-01-01
In this paper we present a formal model of asynchronous communication as a function in the Boyer-Moore logic. The function transforms the signal stream generated by one processor into the signal stream consumed by an independently clocked processor. This transformation 'blurs' edges and 'dilates' time due to differences in the phases and rates of the two clocks and the communications delay. The model can be used quantitatively to derive concrete performance bounds on asynchronous communications at ISO protocol level 1 (physical level). We develop part of the reusable formal theory that permits the convenient application of the model. We use the theory to show that a biphase mark protocol can be used to send messages of arbitrary length between two asynchronous processors. We study two versions of the protocol, a conventional one which uses cells of size 32 cycles and an unconventional one which uses cells of size 18. We conjecture that the protocol can be proved to work under our model for smaller cell sizes and more divergent clock rates but the proofs would be harder.
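Under idealized sampling (no edge blur or clock drift, precisely the effects the formal model does capture), biphase mark coding can be sketched as follows; the default cell of 32 cycles matches the conventional protocol version studied:

```python
def biphase_mark_encode(bits, cell=32):
    # Every cell opens with an edge; a 1-bit adds a mid-cell edge.
    signal, level = [], 0
    half = cell // 2
    for b in bits:
        level ^= 1                      # cell-boundary edge
        signal += [level] * half
        if b:
            level ^= 1                  # mid-cell edge encodes a 1
        signal += [level] * (cell - half)
    return signal

def biphase_mark_decode(signal, cell=32):
    # With perfect clocks, compare the first and last sample of a cell.
    bits = []
    for i in range(0, len(signal), cell):
        bits.append(1 if signal[i] != signal[i + cell - 1] else 0)
    return bits
```

The same sketch works with the unconventional 18-cycle cells; it is the blurred edges and dilated time of real asynchrony that make the correctness proof nontrivial.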
1989-11-29
International Business Machines Corporation, Wright-Patterson AFB. The IBM Development System for the Ada Language AIX/RT Follow-on, Version 1.1. Certificate Number: 891129W1.10198. The compiler was tested using scripts provided by International Business Machines Corporation and reviewed by the validation team. The compiler was tested using all the following
Applications Performance on NAS Intel Paragon XP/S - 15#
NASA Technical Reports Server (NTRS)
Saini, Subhash; Simon, Horst D.; Copper, D. M. (Technical Monitor)
1994-01-01
The Numerical Aerodynamic Simulation (NAS) Systems Division received an Intel Touchstone Sigma prototype model Paragon XP/S-15 in February 1993. The i860 XP microprocessor, with an integrated floating point unit and operating in dual-instruction mode, gives a peak performance of 75 million floating point operations (MFLOPS) per second for 64-bit floating point arithmetic. It is used in the Paragon XP/S-15 which has been installed at NAS, NASA Ames Research Center. The NAS Paragon has 208 nodes and its peak performance is 15.6 GFLOPS. Here, we report on early experience using the Paragon XP/S-15. We have tested its performance using both kernels and applications of interest to NAS. We have measured the performance of BLAS 1, 2 and 3, both assembly-coded and Fortran-coded, on the NAS Paragon XP/S-15. Furthermore, we have investigated the performance of a single-node one-dimensional FFT, a distributed two-dimensional FFT and a distributed three-dimensional FFT. Finally, we measured the performance of the NAS Parallel Benchmarks (NPB) on the Paragon and compare it with the performance obtained on other highly parallel machines, such as the CM-5, CRAY T3D, IBM SP1, etc. In particular, we investigated the following issues, which can strongly affect the performance of the Paragon: a. Impact of the operating system: Intel currently uses as a default the operating system OSF/1 AD from the Open Software Foundation. Paging of the Open Software Foundation (OSF) server, at 22 MB, to make more memory available for the application degrades the performance. We found that when the limit of 26 MB per node, out of 32 MB available, is reached, the application is paged out of main memory using virtual memory. When the application starts paging, the performance is considerably reduced. We found that dynamic memory allocation can help application performance under certain circumstances. b.
Impact of data cache on the i860 XP: We measured the performance of the BLAS, both assembly-coded and Fortran-coded. We found that the measured performance of the assembly-coded BLAS is much less than what the memory bandwidth limitation would predict. The influence of the data cache on different sizes of vectors is also investigated using one-dimensional FFTs. c. Impact of processor layout: There are several different ways processors can be laid out within the two-dimensional grid of processors on the Paragon. We have used the FFT example to investigate performance differences based on processor layout.
Content addressable memory project
NASA Technical Reports Server (NTRS)
Hall, J. Storrs; Levy, Saul; Smith, Donald E.; Miyake, Keith M.
1992-01-01
A parameterized version of the tree processor was designed and tested (by simulation). The leaf processor design is 90 percent complete. We expect to complete and test a combination of tree and leaf cell designs in the next period. Work is proceeding on algorithms for the content addressable memory (CAM), and once the design is complete we will begin simulating algorithms for large problems. The following topics are covered: (1) the practical implementation of content addressable memory; (2) design of a LEAF cell for the Rutgers CAM architecture; (3) a circuit design tool user's manual; and (4) design and analysis of efficient hierarchical interconnection networks.
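The associative search a CAM provides can be modeled in a few lines (a linear scan here, where the hardware compares every stored word in parallel); the word values are arbitrary examples:

```python
class CAM:
    """Toy content-addressable memory: search by value, get addresses."""

    def __init__(self, words):
        self.words = list(words)

    def search(self, key, mask=~0):
        # Return every address whose word matches key under the mask;
        # hardware evaluates all comparisons simultaneously.
        return [addr for addr, w in enumerate(self.words)
                if (w & mask) == (key & mask)]
```

A masked search implements ternary "don't care" bits: searching with mask 0b1000 matches any word whose high bit agrees with the key.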
Sporadic inclusion body myositis: the genetic contributions to the pathogenesis
2014-01-01
Sporadic inclusion body myositis (sIBM) is the commonest idiopathic inflammatory muscle disease in people over 50 years old. It is characterized by slowly progressive muscle weakness and atrophy, with typical pathological changes of inflammation, degeneration and mitochondrial abnormality in affected muscle fibres. The cause(s) of sIBM are still unknown, but are considered complex, with the contribution of multiple factors such as environmental triggers, ageing and genetic susceptibility. This review summarizes the current understanding of the genetic contributions to sIBM and provides some insights for future research in this mysterious disease with the advantage of the rapid development of advanced genetic technology. An international sIBM genetic study is ongoing and whole-exome sequencing will be applied in a large cohort of sIBM patients with the aim of unravelling important genetic risk factors for sIBM. PMID:24948216
Investigation of triaxiality in 122-128Xe (Z = 54) isotopes in the framework of sdg-IBM
NASA Astrophysics Data System (ADS)
Jafarizadeh, M. A.; Ranjbar, Z.; Fouladi, N.; Ghapanvari, M.
In this paper, a transitional interacting boson model (IBM) Hamiltonian, in both sd-IBM and sdg-IBM versions, based on the affine SU(1,1) Lie algebra, is employed to describe deviations from the gamma-unstable nature of the Hamiltonian along the chain of Xe isotopes. The sdg-IBM Hamiltonian provides a better interpretation of this deviation, which cannot be explained in the sd-boson models. The nuclei studied have well-known γ bands close to the γ-unstable limit. The energy levels, B(E2) transition rates, and signature splitting of the γ-vibrational band are calculated via the affine SU(1,1) Lie algebra. An acceptable degree of agreement was achieved based on this procedure. It is shown that in these isotopes the signature splitting is better reproduced by the inclusion of sdg-IBM. In none of them is any evidence found for a stable, triaxial ground-state shape.
The UCLA MEDLARS Computer System
Garvis, Francis J.
1966-01-01
Under a subcontract with UCLA, the Planning Research Corporation has changed the MEDLARS system to make it possible to use the IBM 7094/7040 direct-couple computer instead of the Honeywell 800 for demand searches. The major tasks were the rewriting of the programs in COBOL and the copying of the stored information onto the narrower tapes that IBM computers require. (In the future NLM will copy the tapes for IBM computer users.) The differences in the software required by the two computers are noted. Major and costly revisions would be needed to adapt the large MEDLARS system to the smaller IBM 1401 and 1410 computers. In general, MEDLARS is transferable to other computers of the IBM 7000 class, the new IBM 360, and those of like size, such as the CDC 1604 or UNIVAC 1108, although additional changes are necessary. Potential future improvements are suggested. PMID:5901355
High performance computing environment for multidimensional image analysis
Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo
2007-01-01
Background The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099
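The reported speedup checks out arithmetically, and the even assignment of image slices to processors can be sketched with a hypothetical partition helper (not the authors' code):

```python
# Check the reported 478x speedup: 2.5 hours serial vs 18.8 s parallel.
serial_s = 2.5 * 3600
parallel_s = 18.8
speedup = serial_s / parallel_s          # ~478

def partition_1d(n, parts):
    # Split n slices as evenly as possible among `parts` processors,
    # returning (start, size) per processor. Illustrative helper only.
    base, extra = divmod(n, parts)
    sizes = [base + (1 if i < extra else 0) for i in range(parts)]
    starts = [sum(sizes[:i]) for i in range(parts)]
    return list(zip(starts, sizes))
```

Applying the same split along each axis yields the 3D segments whose halo exchanges stay nearest-neighbor on the torus.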
A performance analysis of advanced I/O architectures for PC-based network file servers
NASA Astrophysics Data System (ADS)
Huynh, K. D.; Khoshgoftaar, T. M.
1994-12-01
In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of the same type, data capacity, and cost) operating independently, not in parallel as in a disk array.
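Part of what makes a disk array competitive is that its redundancy is cheap XOR arithmetic: parity over the data blocks of a stripe lets the array rebuild any one lost block. A minimal sketch of the idea (integers stand in for sector contents):

```python
from functools import reduce

def parity(blocks):
    # XOR of all data blocks in a stripe.
    return reduce(lambda a, b: a ^ b, blocks)

def rebuild(surviving, parity_block):
    # XOR of the parity with the surviving blocks recovers the lost one.
    return reduce(lambda a, b: a ^ b, surviving, parity_block)
```

Because XOR is associative and self-inverse, the lost block drops out of the combined XOR no matter which disk failed.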
Communication Studies of DMP and SMP Machines
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
Understanding the interplay between machines and problems is key to obtaining high performance on parallel machines. This paper investigates the interplay between programming paradigms and the communication capabilities of parallel machines. In particular, we explicate the communication capabilities of the IBM SP-2 distributed-memory multiprocessor and the SGI PowerCHALLENGEarray symmetric multiprocessor. Two benchmark problems, bitonic sorting and the Fast Fourier Transform, are selected for experiments. Communication-efficient algorithms are developed to exploit the overlapping capabilities of the machines. Programs are written in the Message-Passing Interface for portability, and identical codes are used for both machines. Various data sizes and message sizes are used to test the machines' communication capabilities. Experimental results indicate that the communication performance of the multiprocessors is consistent with the size of messages. The SP-2 is sensitive to message size but yields much greater communication overlap because of its communication co-processor. The PowerCHALLENGEarray is not highly sensitive to message size and yields low communication overlap. Bitonic sorting yields lower performance than FFT due to a smaller computation-to-communication ratio.
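Bitonic sorting suits message-passing machines because its compare-exchange schedule is data-independent, so each stage maps to a fixed, paired message exchange. A sequential sketch of the network (power-of-two input assumed):

```python
def bitonic_sort(a, ascending=True):
    # Recursively build a bitonic sequence, then merge it.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    first = bitonic_sort(a[:mid], True)
    second = bitonic_sort(a[mid:], False)
    return bitonic_merge(first + second, ascending)

def bitonic_merge(a, ascending):
    # One compare-exchange stage, then merge the two halves.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    for i in range(mid):
        if (a[i] > a[i + mid]) == ascending:
            a[i], a[i + mid] = a[i + mid], a[i]
    return (bitonic_merge(a[:mid], ascending) +
            bitonic_merge(a[mid:], ascending))
```

On a distributed machine, each compare-exchange loop becomes an exchange of half-arrays between partner processors, which is why the benchmark stresses communication more than FFT does.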
Spectral Calculation of ICRF Wave Propagation and Heating in 2-D Using Massively Parallel Computers
NASA Astrophysics Data System (ADS)
Jaeger, E. F.; D'Azevedo, E.; Berry, L. A.; Carter, M. D.; Batchelor, D. B.
2000-10-01
Spectral calculations of ICRF wave propagation in plasmas have the natural advantage that they require no assumption regarding the smallness of the ion Larmor radius ρ relative to wavelength λ. Results are therefore applicable to all orders in k⊥ρ, where k⊥ = 2π/λ. But because all modes in the spectral representation are coupled, the solution requires inversion of a large dense matrix. In contrast, finite difference algorithms involve only matrices that are sparse and banded. Thus, spectral calculations of wave propagation and heating in tokamak plasmas have so far been limited to 1-D. In this paper, we extend the spectral method to 2-D by taking advantage of new matrix inversion techniques that utilize massively parallel computers. By spreading the dense matrix over 576 processors on the ORNL IBM RS/6000 SP supercomputer, we are able to solve up to 120,000 coupled complex equations, requiring 230 GBytes of memory and achieving over 500 Gflops/sec. Initial results for ASDEX and NSTX will be presented using up to 200 modes in both the radial and vertical dimensions.
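The quoted memory footprint is consistent with a dense matrix of double-precision complex entries, which is worth a one-line check (sizes taken from the abstract):

```python
# N coupled complex equations -> dense N x N complex matrix,
# at 16 bytes per double-precision complex entry.
N = 120_000
total_bytes = N * N * 16
total_gbytes = total_bytes / 1e9     # ~230 GB, matching the abstract
```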
Cobalt: A GPU-based correlator and beamformer for LOFAR
NASA Astrophysics Data System (ADS)
Broekema, P. Chris; Mol, J. Jan David; Nijboer, R.; van Amesfoort, A. S.; Brentjens, M. A.; Loose, G. Marcel; Klijn, W. F. A.; Romein, J. W.
2018-04-01
For low-frequency radio astronomy, software correlation and beamforming on general purpose hardware is a viable alternative to custom designed hardware. LOFAR, a new-generation radio telescope centered in the Netherlands with international stations in Germany, France, Ireland, Poland, Sweden and the UK, has successfully used software real-time processors based on IBM Blue Gene technology since 2004. Since then, developments in technology have allowed us to build a system based on commercial off-the-shelf components that combines the same capabilities with lower operational cost. In this paper, we describe the design and implementation of a GPU-based correlator and beamformer with the same capabilities as the Blue Gene based systems. We focus on the design approach taken, and show the challenges faced in selecting an appropriate system. The design, implementation and verification of the software system show the value of a modern test-driven development approach. Operational experience, based on three years of operations, demonstrates that a general purpose system is a good alternative to the previous supercomputer-based system or custom-designed hardware.
Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob
2003-01-01
The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.
Synaptic defects associated with s-inclusion body myositis are prevented by copper.
Aldunate, R; Minniti, A N; Rebolledo, D; Inestrosa, N C
2012-08-01
Sporadic-inclusion body myositis (s-IBM) is the most common skeletal muscle disorder to afflict the elderly, and is clinically characterized by skeletal muscle degeneration. Its progressive course leads to muscle weakness and wasting, resulting in severe disability. The exact pathogenesis of this disease is unknown and no effective treatment has yet been found. An intriguing aspect of s-IBM is that it shares several molecular abnormalities with Alzheimer's disease, including the accumulation of amyloid-β-peptide (Aβ). Both disorders affect homeostasis of the cytotoxic fragment Aβ(1-42) during aging, but they are clinically distinct diseases. The use of animals that mimic some characteristics of a disease has become important in the search to elucidate the molecular mechanisms underlying its pathogenesis. With the aim of analyzing Aβ-induced pathology and evaluating the consequences of modulating Aβ aggregation, we used Caenorhabditis elegans that express the Aβ human peptide in muscle cells as a model of s-IBM. Previous studies indicate that copper treatment increases the number and size of amyloid deposits in muscle cells, and is able to ameliorate the motility impairments in Aβ transgenic C. elegans. Our recent studies show that neuromuscular synaptic transmission is defective in animals that express the Aβ-peptide and suggest a specific defect at the nicotinic acetylcholine receptor level. Biochemical analyses show that copper treatment increases the number of amyloid deposits but decreases Aβ-oligomers. Copper treatment improves motility, synaptic structure and function. Our results suggest that Aβ-oligomers are the toxic Aβ species that trigger neuromuscular junction dysfunction.
Organization of brain tissue - Is the brain a noisy processor.
NASA Technical Reports Server (NTRS)
Adey, W. R.
1972-01-01
This paper presents some thoughts on functional organization in cerebral tissue. 'Spontaneous' wave and unit firing are considered as essential phenomena in the handling of information. Various models are discussed which have been suggested to describe the pseudorandom behavior of brain cells, leading to a view of the brain as an information processor and its role in learning, memory, remembering and forgetting.
A unified approach to VLSI layout automation and algorithm mapping on processor arrays
NASA Technical Reports Server (NTRS)
Venkateswaran, N.; Pattabiraman, S.; Srinivasan, Vinoo N.
1993-01-01
Development of software tools for designing supercomputing systems is highly complex and not cost-effective. To tackle this, a special-purpose PAcube silicon compiler that integrates different design levels, from cell to processor arrays, has been proposed. As part of this effort, we present in this paper a novel methodology which unifies the problems of Layout Automation and Algorithm Mapping.
Fuel Processor Development for a Soldier-Portable Fuel Cell System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palo, Daniel R.; Holladay, Jamie D.; Rozmiarek, Robert T.
2002-01-01
Battelle is currently developing a soldier-portable power system for the U.S. Army that will continuously provide 15 W (25 W peak) of base load electric power for weeks or months using a micro technology-based fuel processor. The fuel processing train consists of a combustor, two vaporizers, and a steam-reforming reactor. This paper describes the concept and experimental progress to date.
Implementation of NASTRAN on the IBM/370 CMS operating system
NASA Technical Reports Server (NTRS)
Britten, S. S.; Schumacker, B.
1980-01-01
The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.
Sporadic Inclusion Body Myositis: Possible pathogenesis inferred from biomarkers
Weihl, Conrad C.; Pestronk, Alan
2013-01-01
Purpose of review: The relevance of proteins that accumulate and aggregate in the muscle fibers of patients with sporadic inclusion body myositis (sIBM) is unknown. Many of these proteins also aggregate in other disorders, including Alzheimer's disease, leading to speculation that sIBM pathogenesis has similarities to neurodegenerative disorders. Our review will discuss current studies on these protein biomarkers and any utility in sIBM diagnosis. Recent findings: Two "classical" components of sIBM aggregates (Aβ and phospho-tau) have been re-evaluated. Three additional components of aggregates (TDP-43, p62, and LC3) have been identified. The sensitivity and specificity of these biomarkers have been explored. Two studies suggest that TDP-43 may have clinical utility in distinguishing sIBM from other inflammatory myopathies. Summary: The fact that sIBM muscle accumulates multiple protein aggregates, with no single protein appearing in every sIBM patient biopsy, suggests that it is not presently possible to place pathogenic blame on any single protein (i.e. Aβ or TDP-43). Instead, changes in protein homeostasis may lead to the accumulation of different proteins that have a propensity to aggregate in skeletal muscle. Therapies aimed at improving protein homeostasis, instead of targeting a specific protein that may or may not accumulate in all sIBM patients, could be useful future strategies for this devastating and enigmatic disorder. PMID:20664349
ERIC Educational Resources Information Center
Moore, Jack
1988-01-01
The article describes the IBM/Special Needs Exchange which consists of: (1) electronic mail, conferencing, and a library of text and program files on the CompuServe Information Service; and (2) a dial-in database of special education software for IBM and compatible computers. (DB)
The Easy Way of Finding Parameters in IBM (EWofFP-IBM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turkan, Nureddin
E2/M1 multipole mixing ratios of even-even nuclei in the transitional region can be calculated once the B(E2) and B(M1) values are obtained using the PHINT and/or NP-BOS codes. Producing such calculations requires correct energies, and correct parameter values are in turn needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), one of the models of nuclear structure physics. Here, the central problem is finding the best-fitted parameter values of the model. Using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for ^{102-110}Pd and ^{102-110}Ru isotopes were first obtained and the energies then calculated. The calculated results are in good agreement with the experimental ones, and the energy values obtained using EWofFP-IBM are clearly better than the previous theoretical data.
Oflazer, P Serdaroglu; Deymeer, F; Parman, Y
2011-06-01
In a muscle biopsy based study, only 9 out of 5450 biopsy samples, received from all parts of the greater Istanbul area, had typical clinical and most suggestive light microscopic sporadic-inclusion body myositis (s-IBM) findings. Two other patients with, and ten further patients without, characteristic light microscopic findings had a referring diagnosis of s-IBM. As the general and the age-adjusted populations of Istanbul in 2010 were 13,255,685 and 2,347,300 respectively, the calculated corresponding 'estimated prevalences' of most suggestive s-IBM in the Istanbul area were 0.679 × 10⁻⁶ and 3.834 × 10⁻⁶. Since Istanbul receives heavy migration from all regions of Turkey and ours is the only muscle pathology laboratory in Istanbul, projection of these figures to the Turkish population was considered to be reasonable, and an estimate of the prevalence of s-IBM in Turkey was obtained. The calculated 'estimated prevalence' of s-IBM in Turkey is lower than the previously reported rates from other countries. The wide variation in the prevalence rates of s-IBM may reflect different genetic, immunogenetic or environmental factors in different populations.
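The two prevalence figures are simply the nine index cases divided by the two population denominators quoted in the abstract; a quick arithmetic check:

```python
# Prevalence check using the figures quoted in the abstract.
cases = 9                      # biopsy-confirmed most-suggestive s-IBM cases
pop_total = 13_255_685         # general population of Istanbul, 2010
pop_age_adjusted = 2_347_300   # age-adjusted population, 2010

prev_total = cases / pop_total           # ~0.679 x 10^-6
prev_age = cases / pop_age_adjusted      # ~3.834 x 10^-6
print(f"{prev_total:.3e}  {prev_age:.3e}")
```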
ERIC Educational Resources Information Center
Barry, Peter H.
1990-01-01
A graphic, interactive software program that is suitable for teaching students about the measurement and ion dependence of cell membrane potentials is described. The hardware requirements, the aim of the program, how to use the program, other related programs, and its advantages over traditional methods are included. (KR)
Hazardous Waste Cleanup: IBM Corporation, Former in Owego, New York
The corrective action activities at the facility are conducted by IBM Corporation, therefore IBM is listed as the operator of the Part 373 Hazardous Waste Management (HWM) Permit for corrective action. Lockheed Martin Corporation owns the facility and is l
Hazardous Waste Cleanup: IBM Corporation in Dayton, New Jersey
The IBM facility is located at 431 Ridge Road on a 66-acre parcel in a mixed residential and industrial section of Dayton, South Brunswick Township, Middlesex County, New Jersey. IBM's manufacturing plant was constructed in 1956 and used until 1985 for
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-03
...] International Business Machines (IBM), Global Sales Operations Organization, Sales and Distribution Business Manager Roles; One Teleworker Located in Charleston, WV; International Business Machines (IBM), Global Sales Operations Organization, Sales and Distribution Business Unit, Relations Analyst and Band 8...
Design and development of an IBM/VM menu system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cazzola, D.J.
1992-10-01
This report describes a full-screen menu system developed using IBM's Interactive System Productivity Facility (ISPF) and the REXX programming language. The software was developed for the 2800 IBM/VM Electrical Computer Aided Design (ECAD) system. The system was developed to deliver electronic drawing definitions to a corporate drawing release system. Although this report documents the status of the menu system when it was retired, the methodologies used and the requirements defined are very applicable to replacement systems.
1991-01-22
Ada Compiler Validation Summary Report: Certificate Number 900726W1.11017, Verdix Corporation, VADS IBM RISC System/6000, Customer Agreement Number 90-05-29-VRX, 22 January 1991. See Section 3.1 for any additional information about the testing environment.
Fracture Mechanics Analysis of Single and Double Rows of Fastener Holes Loaded in Bearing
1976-04-01
the following subprograms for execution: 1. ASRL FEABL-2 subroutines ASMLTV, ASMSUB, BCON, FACT, ORK, QBACK, SETUP, SIMULQ, STACON, and XTRACT. 2. IBM … based on program code generated by IBM FORTRAN-G1 and FORTRAN-H compilers, with demonstration runs made on an IBM 370/168 computer. Programs SROW and DROW are supplied ready to execute on systems with IBM-standard FORTRAN unit numbers for the card reader (unit 5) and line printer (unit 6). The
Combustor air flow control method for fuel cell apparatus
Clingerman, Bruce J.; Mowery, Kenneth D.; Ripley, Eugene V.
2001-01-01
A method for controlling the heat output of a combustor in a fuel cell apparatus to a fuel processor, where the combustor has dual air inlet streams: atmospheric air and fuel cell cathode effluent containing oxygen-depleted air. In all operating modes, an enthalpy balance is maintained by regulating the air flow to the combustor to support the fuel processor's heat requirements. A control provides a fast feed-forward change in the air valve orifice cross-section in response to a calculated predetermined air flow, the molar constituents of the air stream to the combustor, the pressure drop across the air valve, and a lookup table of orifice cross-sectional area versus valve steps. A feedback loop fine-tunes any error between the measured air flow to the combustor and the predetermined air flow.
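The two-stage scheme described above, a feed-forward valve setting from a lookup table followed by a feedback trim on the measured flow, can be sketched as follows. All table values, gains, and the linear toy plant are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of the control scheme above: a feed-forward step sets
# the valve from a target air flow via a lookup table, then a feedback loop
# trims the residual error between measured and target flow. The table,
# gains, and plant model are hypothetical, not from the patent.

VALVE_TABLE = [(0.0, 0), (0.5, 20), (1.0, 40), (1.5, 60)]  # (flow mol/s, valve steps)

def feedforward_steps(target_flow):
    """Interpolate the valve-step setting for a target air flow."""
    for (f0, s0), (f1, s1) in zip(VALVE_TABLE, VALVE_TABLE[1:]):
        if f0 <= target_flow <= f1:
            return s0 + (s1 - s0) * (target_flow - f0) / (f1 - f0)
    return VALVE_TABLE[-1][1]

def control(target_flow, measure, k_i=0.5, n_iter=50):
    """Feed-forward jump followed by an integral feedback trim."""
    steps = feedforward_steps(target_flow)
    for _ in range(n_iter):
        error = target_flow - measure(steps)
        steps += k_i * error * 40.0   # integral trim (steps per mol/s of error)
    return steps

# Toy plant: flow is linear in valve steps, with a bias the table doesn't know.
plant = lambda steps: steps / 40.0 - 0.05
final = control(1.0, plant)
print(round(plant(final), 3))   # 1.0 -- feedback removes the feed-forward bias
```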
Multi-processor including data flow accelerator module
Davidson, George S.; Pierce, Paul E.
1990-01-01
An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
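The tagged-memory firing rule described above can be sketched in a few lines: each cell holds operand slots with presence tags and links to consumer cells, and a cell is queued for execution only when every tag is set. Class and field names here are illustrative, not from the patent:

```python
# Minimal sketch of the tagged-memory firing rule: each cell has operand
# slots with presence tags; when all tags are set, the cell is scheduled,
# executed, and its result distributed to linked consumer slots.
from collections import deque

class Cell:
    def __init__(self, primitive, n_operands, consumers=()):
        self.primitive = primitive          # the primitive computation to run
        self.slots = [None] * n_operands    # operand slots
        self.tags = [False] * n_operands    # presence tag per slot
        self.consumers = consumers          # (cell, slot_index) linking slots
        self.result = None

def deliver(cell, slot, value, ready_queue):
    cell.slots[slot] = value
    cell.tags[slot] = True
    if all(cell.tags):                      # firing rule: all operands present
        ready_queue.append(cell)

def run(ready_queue):
    while ready_queue:
        cell = ready_queue.popleft()
        cell.result = cell.primitive(*cell.slots)
        for consumer, slot in cell.consumers:
            deliver(consumer, slot, cell.result, ready_queue)

# (a + b) * (a - b) as a three-cell data dependency graph
mul = Cell(lambda x, y: x * y, 2)
add = Cell(lambda x, y: x + y, 2, consumers=[(mul, 0)])
sub = Cell(lambda x, y: x - y, 2, consumers=[(mul, 1)])

q = deque()
for cell, a, b in [(add, 7, 3), (sub, 7, 3)]:
    deliver(cell, 0, a, q)
    deliver(cell, 1, b, q)
run(q)
print(mul.result)   # (7+3)*(7-3) = 40
```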
High-Speed Computation of the Kleene Star in Max-Plus Algebraic System Using a Cell Broadband Engine
NASA Astrophysics Data System (ADS)
Goto, Hiroyuki
This research addresses a high-speed computation method for the Kleene star of the weighted adjacency matrix in a max-plus algebraic system. We focus on systems whose precedence constraints are represented by a directed acyclic graph and implement the method on a Cell Broadband Engine™ (CBE) processor. Since the resulting matrix gives the longest travel times between two adjacent nodes, it is often utilized in scheduling problem solvers for a class of discrete event systems. This research, in particular, attempts to achieve a speedup through two approaches: parallelization and SIMDization (Single Instruction, Multiple Data), both of which can be accomplished by a CBE processor. The former refers to parallel computation using multiple cores, while the latter is a method whereby multiple elements are computed by a single instruction. Using an implementation on a Sony PlayStation 3™ equipped with a CBE processor, we found that SIMDization is effective regardless of the system's size and the number of processor cores used. We also found that the scalability of using multiple cores is remarkable, especially for systems with a large number of nodes. In a numerical experiment with 2000 nodes, we achieved a speedup of 20 times compared with the method without the above techniques.
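For reference, the quantity being accelerated can be computed serially in a few lines. In max-plus algebra, "addition" is max, "multiplication" is +, and the additive identity is -∞; for an n-node DAG the Kleene star A* = E ⊕ A ⊕ A² ⊕ … ⊕ A^(n-1) converges, and its entries give the longest travel times between nodes. A minimal NumPy sketch (no parallelization or SIMDization, just the underlying algebra):

```python
# Serial reference for the max-plus Kleene star of a weighted adjacency
# matrix: "plus" is max, "times" is +, additive identity is -inf.
import numpy as np

NEG_INF = -np.inf

def maxplus_matmul(A, B):
    """Max-plus matrix product: C[i,j] = max_k (A[i,k] + B[k,j])."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def kleene_star(A):
    """A* = E (+) A (+) A^2 (+) ... (+) A^(n-1) for an n-node DAG."""
    n = A.shape[0]
    E = np.full((n, n), NEG_INF)
    np.fill_diagonal(E, 0.0)            # max-plus identity matrix
    result, power = E.copy(), E.copy()
    for _ in range(n - 1):
        power = maxplus_matmul(power, A)
        result = np.maximum(result, power)
    return result

# DAG: arc 0->1 (weight 3), 1->2 (weight 4), 0->2 (weight 5)
A = np.full((3, 3), NEG_INF)
A[0, 1], A[1, 2], A[0, 2] = 3.0, 4.0, 5.0
S = kleene_star(A)
print(S[0, 2])   # longest 0 -> 2 travel time: max(5, 3 + 4) = 7.0
```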
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-29
... 25, 2010, applicable to workers of International Business Machines (IBM), Global Technology Services... hereby issued as follows: All workers of International Business Machines (IBM), Global Technology... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,164] International Business...
Targeting protein homeostasis in sporadic inclusion body myositis.
Ahmed, Mhoriam; Machado, Pedro M; Miller, Adrian; Spicer, Charlotte; Herbelin, Laura; He, Jianghua; Noel, Janelle; Wang, Yunxia; McVey, April L; Pasnoor, Mamatha; Gallagher, Philip; Statland, Jeffrey; Lu, Ching-Hua; Kalmar, Bernadett; Brady, Stefen; Sethi, Huma; Samandouras, George; Parton, Matt; Holton, Janice L; Weston, Anne; Collinson, Lucy; Taylor, J Paul; Schiavo, Giampietro; Hanna, Michael G; Barohn, Richard J; Dimachkie, Mazen M; Greensmith, Linda
2016-03-23
Sporadic inclusion body myositis (sIBM) is the commonest severe myopathy in patients more than 50 years of age. Previous therapeutic trials have targeted the inflammatory features of sIBM but all have failed. Because protein dyshomeostasis may also play a role in sIBM, we tested the effects of targeting this feature of the disease. Using rat myoblast cultures, we found that up-regulation of the heat shock response with arimoclomol reduced key pathological markers of sIBM in vitro. Furthermore, in mutant valosin-containing protein (VCP) mice, which develop an inclusion body myopathy, treatment with arimoclomol ameliorated disease pathology and improved muscle function. We therefore evaluated arimoclomol in an investigator-led, randomized, double-blind, placebo-controlled, proof-of-concept trial in sIBM patients and showed that arimoclomol was safe and well tolerated. Although arimoclomol improved some IBM-like pathology in the mutant VCP mouse, we did not see statistically significant evidence of efficacy in the proof-of-concept patient trial. Copyright © 2016, American Association for the Advancement of Science.
Targeting Protein Homeostasis in Sporadic Inclusion Body Myositis
Ahmed, Mhoriam; Machado, Pedro M.; Miller, Adrian; Spicer, Charlotte; Herbelin, Laura; He, Jianghua; Noel, Janelle; Wang, Yunxia; McVey, April L.; Pasnoor, Mamatha; Gallagher, Philip; Statland, Jeffrey; Lu, Ching-Hua; Kalmar, Bernadett; Brady, Stefen; Sethi, Huma; Samandouras, George; Parton, Matt; Holton, Janice L.; Weston, Anne; Collinson, Lucy; Taylor, J. Paul; Schiavo, Giampietro; Hanna, Michael G.; Barohn, Richard J.; Dimachkie, Mazen M.; Greensmith, Linda
2016-01-01
Sporadic inclusion body myositis (sIBM) is the commonest severe myopathy in patients over age 50. Previous therapeutic trials have targeted the inflammatory features of sIBM, but all have failed. Since protein dyshomeostasis may also play a role in sIBM, we tested the effects of targeting this feature of the disease. Using rat myoblast cultures, we found that up-regulation of the heat shock response with Arimoclomol reduced key pathological markers of sIBM in vitro. Furthermore, in mutant valosin-containing protein (VCP) mice, which develop an inclusion body myopathy (IBM), treatment with Arimoclomol ameliorated disease pathology and improved muscle function. We therefore evaluated the safety and tolerability of Arimoclomol in an investigator-led, randomised, double-blind, placebo-controlled, proof-of-concept patient trial and gathered exploratory efficacy data which showed that Arimoclomol was safe and well tolerated. Although Arimoclomol improved some IBM-like pathology in vitro and in vivo in the mutant VCP mouse, we did not see statistically significant evidence of efficacy in this proof-of-concept patient trial. PMID:27009270
Attaching IBM-compatible 3380 disks to Cray X-MP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.; Midlock, J.L.
1989-01-01
A method of attaching IBM-compatible 3380 disks directly to a Cray X-MP via the XIOP with a BMC is described. The IBM 3380 disks appear to the UNICOS operating system as DD-29 disks with UNICOS file systems. IBM 3380 disks provide cheap, reliable, large-capacity disk storage. Combined with a small number of high-speed Cray disks, the IBM disks provide the bulk of the storage for small files and infrequently used files. Cray Research designed the BMC and its supporting software in the XIOP to allow IBM tapes and other devices to be attached to the X-MP. No hardware changes were necessary, and we added less than 2000 lines of code to the XIOP to accomplish this project. This system has been in operation for over eight months. Future enhancements, such as the use of a cache controller and attachment to a Y-MP, are also described.
ERIC Educational Resources Information Center
Newland, Robert J.; And Others
1988-01-01
Reviews four organic chemistry computer programs and three books. Software includes: (1) NMR Simulator 7--for IBM or Macintosh, (2) Nucleic Acid Structure and Synthesis--for IBM, (3) Molecular Design Editor--for Apple II, and (4) Synthetic Adventure--for Apple II and IBM. Book topics include physical chemistry, polymer pioneers, and the basics of…
Wiendl, Heinz; Mitsdoerffer, Meike; Schneider, Dagmar; Chen, Lieping; Lochmüller, Hanns; Melms, Arthur; Weller, Michael
2003-10-01
B7-H1 is a novel B7 family protein with costimulatory and immune regulatory functions. Here we report that human myoblasts cultured from control subjects and patients with inflammatory myopathies, as well as TE671 muscle rhabdomyosarcoma cells, express high levels of B7-H1 after stimulation with the inflammatory cytokine IFN-gamma. Coculture experiments of MHC class I/II-positive myoblasts with CD4 and CD8 T cells in the presence of antigen demonstrated the functional consequences of muscle-related B7-H1 expression: production of inflammatory cytokines, IFN-gamma and IL-2, by CD4 as well as CD8 T cells was markedly enhanced in the presence of a neutralizing anti-B7-H1 antibody. This observation was paralleled by an augmented expression of the T cell activation markers CD25, ICOS, and CD69, thus showing B7-H1-mediated inhibition of T cell activation. Further, we investigated 23 muscle biopsy specimens from patients with polymyositis (PM), inclusion body myositis (IBM), dermatomyositis (DM), and nonmyopathic controls for B7-H1 expression by immunohistochemistry: B7-H1 was expressed in PM, IBM, and DM specimens but not in noninflammatory and nonmyopathic controls. Staining was predominantly localized to areas of strong inflammation and to muscle cells as well as mononuclear cells. These data highlight the immune regulatory properties of muscle cells and suggest that B7-H1 expression represents an inhibitory mechanism induced upon inflammatory stimuli and aimed at protecting muscle fibers from immune aggression.
FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.
Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora
2013-09-01
In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations that underlie many fields of investigation, such as solid-state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network (CNN) paradigm, which allows us to divide the solution of the differential equation system into many parallel integration operations executed by a custom multiprocessor system. We push the number of processors to the limit of one processor per equation. To test this idea, we implemented DCMARK on a single FPGA, designing the single processor to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2- and 3-D locally interconnected dynamical systems. To test the computing platform, we implemented a 200-cell Korteweg-de Vries (KdV) equation solver and compared simulations conducted on a high-performance PC and on our system. Since our distributed architecture takes constant computing time to solve the equation system, independently of the number of dynamical elements (cells) in the CNN array, it reduces processing time more than other similar systems in the literature. To ensure a high level of reconfigurability, we designed a compact system-on-programmable-chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
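The key property the FPGA design exploits is locality: each cell of the KdV discretization updates from a fixed neighbourhood only. A serial analogue of that cell-per-node decomposition, for u_t = -6·u·u_x - u_xxx on a periodic ring of 200 cells with central differences (grid spacing, time step, and initial condition are illustrative choices, not the paper's):

```python
# Serial analogue of the cell-per-node decomposition: KdV on a periodic
# ring of 200 cells, each cell updated from its neighbours via central
# differences. Parameters are illustrative, not from the paper.
import numpy as np

N, dx, dt = 200, 0.5, 1e-4

def kdv_rhs(u):
    """Central-difference RHS of u_t = -6*u*u_x - u_xxx; np.roll = periodic neighbours."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
            + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx**3)
    return -6.0 * u * ux - uxxx

x = dx * np.arange(N)
c = 0.5                                                    # soliton speed
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 50.0))**2    # soliton profile
mass0 = u.sum()

for _ in range(100):                     # forward-Euler steps (illustrative)
    u = u + dt * kdv_rhs(u)

# Periodic central differences conserve total mass to rounding error.
print(abs(u.sum() - mass0) < 1e-8)
```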
Review of the workshop on low-cost polysilicon for terrestrial photovoltaic solar cell applications
NASA Technical Reports Server (NTRS)
Lutwack, R.
1986-01-01
Topics reviewed include: polysilicon material requirements; effects of impurities; requirements for high-efficiency solar cells; economics; development of silane processes; fluidized-bed processor development; silicon purification; and marketing.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-03
... for the workers and former workers of International Business Machines (IBM), Sales and Distribution... reconsideration alleges that IBM outsourced to India and China. During the reconsideration investigation, it was..., Armonk, New York. The subject worker group supply computer software development and maintenance services...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-02
... Machines (IBM), Software Group Business Unit, Quality Assurance Group, San Jose, California; Notice of... workers of International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA... February 2, 2011 (76 FR 5832). The subject worker group supplies acceptance testing services, design...
[IBM Work and Personal Life Balance Programs].
ERIC Educational Resources Information Center
International Business Machines Corp., Armonk, NY.
These five brochures describe the IBM Corporation's policies, programs, and initiatives designed to meet the needs of employees' child care and family responsibilities as they move through various stages of employment with IBM. The Work and Personal Life Balance Programs brochure outlines (1) policies for flexible work schedules, including…
1989-04-20
International Business Machines Corporation, IBM Development System for the Ada Language, AIX/RT Ada Compiler, Version 1.1.1, Wright-Patterson AFB…Certificate Number: 890420V1.10066, International Business Machines Corporation, IBM Development System for the Ada Language, AIX/RT Ada Compiler, Version 1.1.1…TEST INFORMATION: The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation
1988-03-28
International Business Machines Corporation IBM Development System for the Ada Language, Version 2.1.0 IBM 4381 under MVS/XA, host and target Completion...Joint Program Office, AJPO 20. ABSTRACT (Continue on reverse side if necessary and identify by block number) International Business Machines Corporation...in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record of the object code of
Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example
NASA Astrophysics Data System (ADS)
Devi, Y. D.; Kota, V. K. B.
1993-07-01
A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.
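The quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian referred to above has the generic sdg IBM form sketched below; the single-boson energies and coupling strengths are exactly the parameters the IBM-2 to IBM-1 projection determines (a schematic sketch, not the paper's fitted Hamiltonian):

```latex
% Schematic sdg-IBM Hamiltonian of quadrupole-quadrupole plus
% hexadecupole-hexadecupole type; \epsilon_d, \epsilon_g, \kappa_2, \kappa_4
% are the parameters fixed by the mapping described in the abstract.
H = \epsilon_d\,\hat{n}_d + \epsilon_g\,\hat{n}_g
    - \kappa_2\, Q^{(2)} \cdot Q^{(2)} - \kappa_4\, Q^{(4)} \cdot Q^{(4)}
```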
Aircraft noise prediction program propeller analysis system IBM-PC version user's manual version 2.0
NASA Technical Reports Server (NTRS)
Nolan, Sandra K.
1988-01-01
The IBM-PC version of the Aircraft Noise Prediction Program (ANOPP) Propeller Analysis System (PAS) is a set of computational programs for predicting the aerodynamics, performance, and noise of propellers. The ANOPP-PAS is a subset of a larger version of ANOPP which can be executed on CDC or VAX computers. This manual provides a description of the IBM-PC version of the ANOPP-PAS and its prediction capabilities, and instructions on how to use the system on an IBM-XT or IBM-AT personal computer. Sections within the manual document installation, system design, ANOPP-PAS usage, data entry preprocessors, and ANOPP-PAS functional modules and procedures. Appendices to the manual include a glossary of ANOPP terms and information on error diagnostics and recovery techniques.
Effects of an Appearance-Focused Interpretation Training Intervention on Eating Disorder Symptoms.
Summers, Berta J; Cougle, Jesse R
2018-03-13
Previous research suggests that computerized interpretation bias modification (IBM) techniques may be useful for modifying thoughts and behaviours relevant to eating pathology; however, little is known about the utility of IBM for decreasing specific eating disorder (ED) symptoms (e.g. bulimia, drive for thinness). The current study sought to further examine the utility of IBM for ED symptoms via secondary analyses of an examination of IBM for individuals with elevated body dysmorphic disorder (BDD) symptoms (see Summers and Cougle, 2016), as these disorders are both characterized by threat interpretation biases of ambiguous appearance-related information. We recruited 41 participants for a randomized trial comparing four sessions of IBM aimed at modifying problematic social and appearance-related threat interpretation biases with a placebo control training (PC). At 1-week post-treatment, and relative to the PC, the IBM group reported greater reductions in negative/threat interpretations of ambiguous information in favour of positive/benign biases. Furthermore, among individuals with high pre-treatment bulimia symptoms, IBM yielded greater reductions in bulimia symptoms compared with PC at post-treatment. No treatment effects were observed on drive for thinness symptoms. The current study suggests that cognitive interventions for individuals with primary BDD symptoms may improve co-occurring ED symptoms such as bulimia.
Rare variants in SQSTM1 and VCP genes and risk of sporadic inclusion body myositis.
Gang, Qiang; Bettencourt, Conceição; Machado, Pedro M; Brady, Stefen; Holton, Janice L; Pittman, Alan M; Hughes, Deborah; Healy, Estelle; Parton, Matthew; Hilton-Jones, David; Shieh, Perry B; Needham, Merrilee; Liang, Christina; Zanoteli, Edmar; de Camargo, Leonardo Valente; De Paepe, Boel; De Bleecker, Jan; Shaibani, Aziz; Ripolone, Michela; Violano, Raffaella; Moggio, Maurizio; Barohn, Richard J; Dimachkie, Mazen M; Mora, Marina; Mantegazza, Renato; Zanotti, Simona; Singleton, Andrew B; Hanna, Michael G; Houlden, Henry
2016-11-01
Genetic factors have been suggested to be involved in the pathogenesis of sporadic inclusion body myositis (sIBM). Sequestosome 1 (SQSTM1) and valosin-containing protein (VCP) are 2 key genes associated with several neurodegenerative disorders but have yet to be thoroughly investigated in sIBM. A candidate gene analysis was conducted using whole-exome sequencing data from 181 sIBM patients, and whole-transcriptome expression analysis was performed in patients with genetic variants of interest. We identified 6 rare missense variants in the SQSTM1 and VCP genes in 7 sIBM patients (4.0%). Two variants, the SQSTM1 p.G194R and the VCP p.R159C, were significantly overrepresented in this sIBM cohort compared with controls. Five of these variants had been previously reported in patients with degenerative diseases. The messenger RNA levels of major histocompatibility complex genes were upregulated, this elevation being more pronounced in the SQSTM1 patient group. We report for the first time potentially pathogenic SQSTM1 variants and expand the spectrum of VCP variants in sIBM. These data suggest that defects in neurodegenerative pathways may confer genetic susceptibility to sIBM and reinforce the mechanistic overlap in these neurodegenerative disorders. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elrod, D.C.; Turner, W.D.
TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or the primary dependent variables (temperature, pressure, or field strength). Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state. Hardware: IBM360, IBM370, CDC7600. Software: FORTRAN IV (95%) and BAL (5%) (IBM); FORTRAN IV (CDC). Operating systems: OS/360 (IBM360), OS/370 (IBM370), SCOPE 2.1.5 (CDC7600). As dimensioned, the program requires 400K bytes of storage on an IBM370 and 145,100 (octal) words on a CDC7600.
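As an illustration of the class of problems TRUMP handles, the sketch below marches a 1-D transient heat-conduction problem, with an optional Arrhenius heat-production term, to steady state using an explicit finite-difference scheme. The grid size, coefficients, and boundary temperatures are hypothetical and chosen for illustration only; they are not taken from the code's documentation.

```python
import math

def trump_like_step(T, dt, dx, alpha, A=0.0, Ea=0.0, R=8.314):
    """One explicit finite-difference step of the 1-D heat equation
    dT/dt = alpha * d2T/dx2 + A*exp(-Ea/(R*T)),
    a simple analogue of the potential-field flow problem TRUMP solves,
    with an Arrhenius heat-production source term."""
    n = len(T)
    new = T[:]  # boundary values held fixed (Dirichlet conditions)
    for i in range(1, n - 1):
        diffusion = alpha * (T[i-1] - 2*T[i] + T[i+1]) / dx**2
        source = A * math.exp(-Ea / (R * T[i])) if A else 0.0
        new[i] = T[i] + dt * (diffusion + source)
    return new

# march to steady state between fixed-temperature boundaries
T = [300.0] * 11
T[0], T[-1] = 400.0, 300.0
for _ in range(5000):
    T = trump_like_step(T, dt=0.01, dx=0.1, alpha=0.1)
# at steady state the profile is linear between the boundary values
```

With no source term and fixed boundaries, the iteration relaxes to the linear steady-state profile, which is one of the termination criteria TRUMP supports.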
Hybrid fuel cell/diesel generation total energy system, part 2
NASA Astrophysics Data System (ADS)
Blazek, C. F.
1982-11-01
Meeting the Goldstone Deep Space Communications Complex (GDSCC) electrical and thermal requirements with the existing system was compared with meeting them using fuel cells. Fuel cell technology selection was based on a 1985 time frame for installation. The most cost-effective fuel feedstock for fuel cell application was identified. Fuels considered included diesel oil, natural gas, methanol, and coal. These fuel feedstocks were evaluated not only on the cost and efficiency of the fuel conversion process, but also on the complexity of fuel processor integration and its effects on system operation and thermal energy availability. After a review of fuel processor technology, catalytic steam reformer technology was selected based on the ease of integration and the economics of hydrogen production. The phosphoric acid fuel cell was selected for application at the GDSCC due to its commercial readiness for near-term application. Fuel cell systems were analyzed for both natural gas and methanol feedstocks. The subsequent economic analysis indicated that a natural gas fueled system was the most cost effective of the cases analyzed.
Hybrid fuel cell/diesel generation total energy system, part 2
NASA Technical Reports Server (NTRS)
Blazek, C. F.
1982-01-01
Meeting the Goldstone Deep Space Communications Complex (GDSCC) electrical and thermal requirements with the existing system was compared with meeting them using fuel cells. Fuel cell technology selection was based on a 1985 time frame for installation. The most cost-effective fuel feedstock for fuel cell application was identified. Fuels considered included diesel oil, natural gas, methanol, and coal. These fuel feedstocks were evaluated not only on the cost and efficiency of the fuel conversion process, but also on the complexity of fuel processor integration and its effects on system operation and thermal energy availability. After a review of fuel processor technology, catalytic steam reformer technology was selected based on the ease of integration and the economics of hydrogen production. The phosphoric acid fuel cell was selected for application at the GDSCC due to its commercial readiness for near-term application. Fuel cell systems were analyzed for both natural gas and methanol feedstocks. The subsequent economic analysis indicated that a natural gas fueled system was the most cost effective of the cases analyzed.
High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology
NASA Astrophysics Data System (ADS)
Rajan, K.; Patnaik, L. M.; Ramakrishna, J.
1997-08-01
Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating in a few iterations tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures compared to those produced by SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures compared to PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs.
We discuss and compare the performances of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon Graphics Indigo 2 workstation, and on an EH system. The results show that an EH(3,1) using DSP chips as PEs executes the modified PBR algorithm about 100 times faster than an IBM 6000 RISC workstation. We have executed the algorithms on a 4-node IBM SP2 parallel computer. The results show that the execution time of the algorithm on an EH(3,1) is better than that of a 4-node IBM SP2 system. The speed-up of an EH(3,1) system with eight PEs and one network controller is approximately 7.85.
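For readers unfamiliar with SIRT-type iterations, here is a minimal, generic SIRT sketch on a toy 3-ray, 2-pixel system. It is not the authors' modified PBR algorithm; the system matrix, measurements, and relaxation factor are illustrative stand-ins.

```python
def sirt(A, b, iters=200, lam=0.5):
    """Generic SIRT reconstruction sketch: every iteration back-projects
    all ray residuals simultaneously, normalized by row and column sums.
    (PBR and the paper's modified PBR refine this basic scheme.)"""
    m, n = len(A), len(A[0])
    row_sum = [sum(A[i]) or 1.0 for i in range(m)]
    col_sum = [sum(A[i][j] for i in range(m)) or 1.0 for j in range(n)]
    x = [0.0] * n  # start from an empty image
    for _ in range(iters):
        residual = [b[i] - sum(A[i][j] * x[j] for j in range(n))
                    for i in range(m)]
        for j in range(n):
            x[j] += lam * sum(A[i][j] * residual[i] / row_sum[i]
                              for i in range(m)) / col_sum[j]
    return x

# two pixels observed by three overlapping "rays"
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, 3.0, 5.0]  # consistent projections of the image [2, 3]
x = sirt(A, b)       # converges to the exact image for this consistent system
```

Because each iteration sweeps every ray and every pixel, the per-iteration cost scales with the number of nonzero matrix entries, which is the compute intensity the parallel implementation targets.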
Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad
1995-01-01
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers are documented. Spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can effectively be parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs to match the actual costs relative to changes in the number of grid points. By increasing the number of processors, slower than linear speedups are achieved with optimized (machine-dependent library) routines. This slower than linear speedup results because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflop/s on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.
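The slower-than-linear speedup attributed to the FFT-dominated cost can be pictured with Amdahl's law; the 5% non-parallelizable fraction used below is a hypothetical stand-in for communication and remapping overhead, not a figure from the paper.

```python
def amdahl(serial_frac, p):
    """Amdahl's-law speedup on p processors for a code whose runtime has a
    non-parallelizable fraction serial_frac; illustrates why an
    FFT-dominated code sees slower-than-linear speedup."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / p)

s32 = amdahl(0.05, 32)  # hypothetical 5% serial fraction: well below 32x
```

Even a modest serial fraction caps the achievable speedup (here around 12.5x on 32 processors), consistent with the sublinear scaling reported for the PSDNS code.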
Pinto, Nicolas; Doukhan, David; DiCarlo, James J; Cox, David D
2009-11-01
While many models of biological object recognition share a common set of "broad-stroke" properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model--e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct "parts" have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphic cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.
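The screening strategy described above can be sketched generically: sample many random parameter instantiations, score each, and keep the top performers for further analysis. The parameter space and scoring function below are toy stand-ins, not the paper's vision architectures or benchmarks.

```python
import random

def evaluate(params):
    """Stand-in for training and testing one model instantiation; in the
    paper this is a GPU/Cell evaluation of an object-recognition model.
    This toy score merely favors one hypothetical parameter combination."""
    return -(params["units"] - 256) ** 2 - (params["pool"] - 5) ** 2

def screen(n_candidates, seed=0):
    """High-throughput screening sketch: sample candidates at random from
    the parameter space and keep the five best scorers."""
    rng = random.Random(seed)
    space = {"units": [64, 128, 256, 512], "pool": [3, 5, 7, 9]}
    scored = []
    for _ in range(n_candidates):
        params = {k: rng.choice(v) for k, v in space.items()}
        scored.append((evaluate(params), params))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:5]  # the "promising" instantiations kept for analysis

best = screen(200)
```

The value of the approach comes entirely from throughput: when each evaluation is cheap enough (here thanks to stream-processing hardware), brute-force exploration of thousands of instantiations becomes feasible.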
Pinto, Nicolas; Doukhan, David; DiCarlo, James J.; Cox, David D.
2009-01-01
While many models of biological object recognition share a common set of “broad-stroke” properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model—e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct “parts” have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphic cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision. PMID:19956750
Improved High-Energy Response of AlGaAs/GaAs Solar Cells Using a Low-Cost Technology
NASA Astrophysics Data System (ADS)
Noorzad, Camron D.; Zhao, Xin; Harotoonian, Vache; Woodall, Jerry M.
2016-12-01
We report on an AlGaAs/GaAs solar cell with a significantly increased high-energy response that was produced via a modified liquid phase epitaxy (LPE) technique. This technique uses a one-step process in which the solid-liquid equilibrium Al-Ga-As:Zn melt in contact with an n-type vendor GaAs substrate simultaneously getters impurities in the substrate that shorten minority carrier lifetimes, diffuses Zn into the substrate to create a p-n junction, and forms a thin p-AlGaAs window layer that enables more high-energy light to be efficiently absorbed. Unlike conventional LPE, this process is performed isothermally. In our "double Al" method, the ratio of Al in the melt ("Al melt ratio") that was used in our process was two times more than what was previously reported in the record 1977 International Business Machines (IBM) solar cell. Photoluminescence (PL) results showed our double Al sample yielded a response to 405 nm light ("blue light"), which was more than twice as intense as the response from our replicated IBM cell. The original 1977 cell had a low-intensity spectral response to photon wavelengths under 443 nm (Woodall and Hovel in Sol Energy Mater Sol Cells 29:176, 1990). Secondary ion mass spectrometry results confirmed the increased blue light response was due to a large reduction in AlGaAs window layer thickness. These results proved increasing the Al melt ratio broadens the spectrum of light that can be transmitted through the window layer into the active GaAs region for absorption, increasing the overall solar cell efficiency. Our enhanced double Al method can pave the way for large-scale manufacturing of low-cost, high-efficiency solar cells.
A Comparison of the Apple Macintosh and IBM PC in Laboratory Applications.
ERIC Educational Resources Information Center
Williams, Ron
1986-01-01
Compares Apple Macintosh and IBM PC microcomputers in terms of their usefulness in the laboratory. No attempt is made to equalize the two computer systems since they represent opposite ends of the computer spectrum. Indicates that the IBM PC is the most useful general-purpose personal computer for laboratory applications. (JN)
Training in the Workplace: An IBM Case Study. Contractor Report.
ERIC Educational Resources Information Center
Grubb, Ralph E.
International Business Machines Corporation's (IBM) efforts to develop a corporate culture are associated with its founder, Thomas J. Watson, Sr. From the start of his association with the company in 1914, the importance of education was stressed. The expansion of the education and training organization paralleled IBM's 75-year growth. In January…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-02
... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,554] International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA, San Jose, CA; Notice of... determination of the TAA petition filed on behalf of workers at International Business Machines (IBM), Software...
SERDAROGLU OFLAZER, P.; DEYMEER, F.; PARMAN, Y.
2011-01-01
SUMMARY In a muscle biopsy based study, only 9 out of 5450 biopsy samples, received from all parts of the greater Istanbul area, had typical clinical and most suggestive light microscopic sporadic inclusion body myositis (s-IBM) findings. Two other patients with and ten further patients without characteristic light microscopic findings had a referring diagnosis of s-IBM. As the general and the age-adjusted populations of Istanbul in 2010 were 13,255,685 and 2,347,300 respectively, the calculated corresponding 'estimated prevalences' of most suggestive s-IBM in the Istanbul area were 0.679 × 10^-6 and 3.834 × 10^-6. Since Istanbul receives heavy migration from all regions of Turkey and ours is the only muscle pathology laboratory in Istanbul, projection of these figures to the Turkish population was considered to be reasonable and an estimate of the prevalence of s-IBM in Turkey was obtained. The calculated 'estimated prevalence' of s-IBM in Turkey is lower than the previously reported rates from other countries. The wide variation in the prevalence rates of s-IBM may reflect different genetic, immunogenetic or environmental factors in different populations. PMID:21842592
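The reported prevalence figures follow from simple arithmetic on the case count and the two population denominators:

```python
def prevalence_per_million(cases, population):
    """Estimated prevalence expressed per 10^6 population, as in the study."""
    return cases / population * 1e6

overall = prevalence_per_million(9, 13_255_685)   # general Istanbul population
age_adj = prevalence_per_million(9, 2_347_300)    # age-adjusted denominator
# overall ≈ 0.679 per million; age_adj ≈ 3.834 per million,
# matching the 0.679 × 10^-6 and 3.834 × 10^-6 figures quoted above
```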
Broccolini, A; Engel, W K; Alvarez, R B; Askanas, V
2000-04-01
Sporadic inclusion-body myositis (s-IBM) is the most common progressive muscle disease of older persons. Pathologically, the muscle biopsy manifests various degrees of inflammation and specific vacuolar degeneration of muscle fibers characterized by paired helical filaments (PHFs) composed of phosphorylated tau. IBM vacuolated fibers also contain accumulations of several other Alzheimer-characteristic proteins. Molecular mechanisms leading to formation of the PHFs and accumulations of proteins in IBM muscle are not known. We report that the abnormal muscle fibers of IBM contained (i) acridine-orange-positive RNA inclusions that colocalized with the immunoreactivity of phosphorylated tau and (ii) survival motor neuron protein immunoreactive inclusions, which by immuno-electron microscopy were confined to paired helical filaments. This study demonstrates two novel components of the IBM paired helical filaments, which may lead to better understanding of their pathogenesis.
Best kept secrets ... First Coast Systems, Inc. (FCS).
Andrew, W F
1991-04-01
The FCS/APaCS system is a viable option for small- to medium-size hospitals (up to 400 beds). The table-driven system takes full advantage of IBM AS/400 computer architecture. A comprehensive application set, provided in an integrated database environment, is adaptable to multi-facility environments. Price/performance appears to be competitive. Commitment to the IBM AS/400 environment assures cost-effective hardware platforms backed by IBM support and resources. As an IBM Health Industry Business Partner, FCS (and its clients) benefits from IBM's well-known commitment to quality and service. Corporate emphasis on user involvement and satisfaction, along with a commitment to quality and service for the APaCS systems, assures clients of "leading edge" capabilities in this evolutionary healthcare delivery environment. FCS/APaCS will be a strong contender in selected marketing environments.
NASA Astrophysics Data System (ADS)
Karstedt, Jörg; Ogrzewalla, Jürgen; Severin, Christopher; Pischinger, Stefan
In this work, the concept development, system layout, component simulation and the overall DOE system optimization of a HT-PEM fuel cell APU with a net electric power output of 4.5 kW and an onboard methane fuel processor are presented. A highly integrated system layout has been developed that enables fast startup within 7.5 min, a closed system water balance and high fuel processor efficiencies of up to 85% due to the recuperation of the anode offgas burner heat. The integration of the system battery into the load management enhances the transient electric performance and the maximum electric power output of the APU system. Simulation models of the carbon monoxide influence on HT-PEM cell voltage, the concentration and temperature profiles within the autothermal reformer (ATR) and the CO conversion rates within the water-gas shift stages (WGSs) have been developed. They enable the optimization of the CO concentration in the anode gas of the fuel cell in order to achieve maximum system efficiencies and an optimized dimensioning of the ATR and WGS reactors. Furthermore, a DOE optimization of the global system parameters cathode stoichiometry, anode stoichiometry, air/fuel ratio and steam/carbon ratio of the fuel processing system has been performed in order to achieve maximum system efficiencies for all system operating points under given boundary conditions.
Dimachkie, Mazen M; Barohn, Richard J
2014-08-01
The idiopathic inflammatory myopathies (IIMs) are a heterogeneous group of rare disorders that share many similarities. In addition to sporadic inclusion body myositis (IBM), these include dermatomyositis, polymyositis, and autoimmune necrotizing myopathy. IBM is the most common IIM after age 50 years. Muscle histopathology shows endomysial inflammatory exudates surrounding and invading nonnecrotic muscle fibers often accompanied by rimmed vacuoles and protein deposits. It is likely that IBM has a prominent degenerative component. This article reviews the evolution of knowledge in IBM, with emphasis on recent developments in the field, and discusses ongoing clinical trials. Copyright © 2014 Elsevier Inc. All rights reserved.
LLNL Partners with IBM on Brain-Like Computing Chip
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Essen, Brian
Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips.
LLNL Partners with IBM on Brain-Like Computing Chip
Van Essen, Brian
2018-06-25
Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips.
1988-05-19
System for the Ada Language System, Version 1.1.0, International Business Machines Corporation, Wright-Patterson AFB. IBM 4381 under VM/SP CMS... AVF Control Number: AVF-VSR-82.1087, 87-03-10-TEL. Ada® COMPILER VALIDATION SUMMARY REPORT: International Business Machines... Organization (AVO). On-site testing was conducted from 18 May 1987 through 19 May 1987 at International Business Machines Corporation, San Diego CA.
Taste function assessed by electrogustometry in burning mouth syndrome: a case-control study.
Braud, A; Descroix, V; Ungeheuer, M-N; Rougeot, C; Boucher, Y
2017-04-01
Idiopathic burning mouth syndrome (iBMS) is characterized by oral persistent pain without any clinical or biological abnormality. The aim of this study was to evaluate taste function in iBMS subjects and healthy controls. Electrogustometric thresholds (EGMt) were recorded in 21 iBMS patients and 21 paired-matched controls at nine loci of the tongue assessing fungiform and foliate gustatory papillae function. Comparison of EGMt was performed using the nonparametric Wilcoxon signed-rank test. A correlation between EGMt and self-perceived pain intensity assessed using a visual analogic scale (VAS) was analyzed with the Spearman coefficient. The level of significance was fixed at P < 0.05. Mean EGMt were significantly increased in iBMS for the right side of the dorsum of the tongue and the right lateral side of the tongue (P < 0.05). In the iBMS group, VAS scores were significantly correlated to EGMt at the tip of the tongue (r = -0.59; P < 0.05) and at the right and left lateral sides of the tongue (respectively, r = -0.49 and r = -0.47; P < 0.05). These data demonstrate impaired taste sensitivity in iBMS patients within the fungiform and foliate taste bud fields and support a potent gustatory/nociceptive interaction in iBMS. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Dynamically programmable cache
NASA Astrophysics Data System (ADS)
Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas
1998-10-01
Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with this approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors which offers solutions to the above problems. To solve memory access problems, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create a multi-level reconfigurable machine. As a result, DPC machines have both higher data accessibility and higher FPGA memory bandwidth. To solve the limited FPGA resource problem, DPC processors implement a multi-context switching (virtualization) concept. Virtualization allows implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, the speedup for DPC machines is shown to be 5X over an Altera FLEX10K FPGA chip and 2X over a Sun Ultra1 SPARCstation for two different algorithms (convolution and motion estimation).
ERIC Educational Resources Information Center
International Business Machines Corp., Milford, CT. Academic Information Systems.
This agenda lists activities scheduled for the second IBM (International Business Machines) Academic Information Systems University AEP (Advanced Education Projects) Conference, which was designed to afford the universities participating in the IBM-sponsored AEPs an opportunity to demonstrate their AEP experiments in educational computing. In…
Computer Associates International, CA-ACF2/VM Release 3.1
1987-09-09
Associates CA-ACF2/VM Bibliography: International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Program Logic Manual, publication number LY20-0889; International Business Machines Corporation, IBM System/370 Principles of Operation, publication number GA22-7000; International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Installation and System Administrator's...
Applications Performance Under MPL and MPI on NAS IBM SP2
NASA Technical Reports Server (NTRS)
Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)
1994-01-01
On July 5, 1994, an IBM Scalable POWERparallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of a RISC 6000/590 workstation module with a clock of 66.5 MHz which can perform four floating point operations per clock, with a peak performance of 266 Mflop/s. By the end of 1994, the IBM SP2 will be upgraded from 64 to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS 6000/590 will help application scientists in porting, optimizing, and tuning codes from other machines such as the CRAY C90 and the Paragon to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating point units, and data cache optimization on the RS 6000/590 are illustrated, with examples giving performance gains at each optimization step. The conversion of codes using Intel's message passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we will present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS 2 and BLAS 3 routines, e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM runs at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and the CRAY T3D.
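As a reminder of what the DAXPY kernel computes (y := alpha*x + y, two floating point operations per element), here is a pure-Python sketch with a crude flop-rate measurement; the rates it reports are of course nowhere near the tuned Fortran figures quoted above, and the vector length is arbitrary.

```python
import time

def daxpy(alpha, x, y):
    """BLAS-1 DAXPY, y := alpha*x + y, in pure Python. The paper's
    hand-optimized Fortran version of this kernel reaches 175 Mflop/s
    per SP2 node; this sketch only illustrates the operation itself."""
    for i in range(len(y)):
        y[i] += alpha * x[i]
    return y

n = 100_000
x, y = [1.0] * n, [2.0] * n
t0 = time.perf_counter()
daxpy(3.0, x, y)
elapsed = time.perf_counter() - t0
mflops = 2 * n / elapsed / 1e6  # 2 flops (multiply + add) per element
```

Per-node flop rates like those quoted in the abstract are obtained exactly this way: counting the kernel's arithmetic operations and dividing by wall-clock time.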
A parallel simulated annealing algorithm for standard cell placement on a hypercube computer
NASA Technical Reports Server (NTRS)
Jones, Mark Howard
1987-01-01
A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
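The serial core of the algorithm can be sketched as follows; the toy netlist, cooling schedule, and move set below are invented for illustration and do not reproduce the paper's parallel hypercube implementation, which distributes such moves across processors.

```python
import math
import random

def anneal_placement(cost, positions, iters=20000, t0=10.0, seed=1):
    """Minimal serial simulated-annealing sketch for cell placement:
    propose a cell exchange, accept uphill moves with probability
    exp(-dC/T), and cool the temperature geometrically."""
    rng = random.Random(seed)
    best = positions[:]
    t = t0
    c = cost(positions)
    for _ in range(iters):
        i, j = rng.randrange(len(positions)), rng.randrange(len(positions))
        positions[i], positions[j] = positions[j], positions[i]  # cell exchange
        c_new = cost(positions)
        if c_new <= c or rng.random() < math.exp(-(c_new - c) / t):
            c = c_new
            if c <= cost(best):
                best = positions[:]
        else:
            positions[i], positions[j] = positions[j], positions[i]  # undo move
        t *= 0.9995  # geometric cooling schedule
    return best

# toy netlist: cost is total wire length of chained cells along one row
cells = list(range(8))
random.Random(0).shuffle(cells)
wirelen = lambda p: sum(abs(p.index(k) - p.index(k + 1)) for k in range(7))
placed = anneal_placement(wirelen, cells)
```

The parallel version in the paper evaluates the cost function cooperatively across hypercube processors and broadcasts accepted cell locations, but the accept/reject logic per move is the same as in this sketch.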
Broccolini, Aldobrando; Engel, W. King; Alvarez, Renate B.; Askanas, Valerie
2000-01-01
Sporadic inclusion-body myositis (s-IBM) is the most common progressive muscle disease of older persons. Pathologically, the muscle biopsy manifests various degrees of inflammation and specific vacuolar degeneration of muscle fibers characterized by paired helical filaments (PHFs) composed of phosphorylated tau. IBM vacuolated fibers also contain accumulations of several other Alzheimer-characteristic proteins. Molecular mechanisms leading to formation of the PHFs and accumulations of proteins in IBM muscle are not known. We report that the abnormal muscle fibers of IBM contained (i) acridine-orange-positive RNA inclusions that colocalized with the immunoreactivity of phosphorylated tau and (ii) survival motor neuron protein immunoreactive inclusions, which by immuno-electron microscopy were confined to paired helical filaments. This study demonstrates two novel components of the IBM paired helical filaments, which may lead to better understanding of their pathogenesis. PMID:10751338
Individual-based modeling of ecological and evolutionary processes
DeAngelis, Donald L.; Mooij, Wolf M.
2005-01-01
Individual-based models (IBMs) allow the explicit inclusion of individual variation in greater detail than do classical differential-equation and difference-equation models. Inclusion of such variation is important for continued progress in ecological and evolutionary theory. We provide a conceptual basis for IBMs by describing five major types of individual variation in IBMs: spatial, ontogenetic, phenotypic, cognitive, and genetic. IBMs are now used in almost all subfields of ecology and evolutionary biology. We map those subfields and look more closely at selected key papers on fish recruitment, forest dynamics, sympatric speciation, metapopulation dynamics, maintenance of diversity, and species conservation. Theorists are currently divided on whether IBMs represent only a practical tool for extending classical theory to more complex situations, or whether individual-based theory represents a radically new research program. We feel that the tension between these two poles of thinking can be a source of creativity in ecology and evolutionary theory.
New Generation General Purpose Computer (GPC) compact IBM unit
NASA Technical Reports Server (NTRS)
1991-01-01
New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-16
... delist? B. How does IBM generate the waste? C. How did IBM sample and analyze the petitioned waste? D. ... thickened/conditioned, and pressed to generate the F006 waste stream. ... the volatiles and semi-volatiles samples were non-detect. E. How did EPA evaluate the risk of ...
Survey of Mass Storage Systems
1975-09-01
software that Precision Instruments can provide. System Name: IBM 3850 Mass Storage System. Manufacturer and Location: International Business Machines ... Datamation, pp. 52-58, October 1973. ... 17. International Business Machines, IBM 3850 Mass Storage System Facts Folder, White Plains, NY, n.d. 18. International Business Machines, Introduction to the IBM 3850 Mass Storage System (MSS), White Plains, NY, n.d. 19. International Business Machines
Steck, Dominik T; Choi, Christine; Gollapudy, Suneeta; Pagel, Paul S
2016-04-01
Sporadic inclusion body myositis (IBM) is an inflammatory myopathy characterized by progressive asymmetric extremity weakness, oropharyngeal dysphagia, and the potential for exaggerated sensitivity to neuromuscular blockers and respiratory compromise. The authors describe their management of a patient with IBM undergoing urgent orthopedic surgery. An 81-year-old man with IBM suffered a left intertrochanteric femoral fracture after falling down stairs. His IBM caused progressive left proximal lower extremity, bilateral distal upper extremity weakness (left > right), and oropharyngeal dysphagia (solid food, pills). He denied dyspnea, exercise intolerance, and a history of aspiration. Because respiratory insufficiency resulting from diaphragmatic dysfunction and prolonged duration of action of neuromuscular blockers may occur in IBM, the authors avoided using a neuromuscular blocker. After applying cricoid pressure, anesthesia was induced using intravenous lidocaine, propofol, remifentanil followed by manual ventilation with inhaled sevoflurane in oxygen. Endotracheal intubation was accomplished without difficulty; anesthesia was then maintained using remifentanil and sevoflurane. The fracture was repaired with a trochanteric femoral nail. The patient was extubated without difficulty and made an uneventful recovery. In summary, there is a lack of consensus about the use of neuromuscular blockers in patients with IBM. The authors avoided these drugs and were able to easily secure the patient's airway and maintain adequate muscle relaxation using a balanced sevoflurane-remifentanil anesthetic. Clinical trials are necessary to define the pharmacology of neuromuscular blockers in patients with IBM and determine whether use of these drugs contributes to postoperative respiratory insufficiency in these vulnerable patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, J.E.; Roussin, R.W.; Gilpin, H.
A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.
Comparisons of some large scientific computers
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1981-01-01
In 1975, the National Aeronautics and Space Administration (NASA) began studies to assess the technical and economic feasibility of developing a computer having a sustained computational speed of one billion floating point operations per second and a working memory of at least 240 million words. Such a powerful computer would allow computational aerodynamics to play a major role in aeronautical design and advanced fluid dynamics research. Based on favorable results from these studies, NASA proceeded with developmental plans. The computer was named the Numerical Aerodynamic Simulator (NAS). To help ensure that the estimated cost, schedule, and technical scope were realistic, a brief study was made of past large scientific computers. Large discrepancies between inception and operation in scope, cost, or schedule were studied so that they could be minimized with NASA's proposed new computer. The main computers studied were the ILLIAC IV, STAR 100, Parallel Element Processor Ensemble (PEPE), and Shuttle Mission Simulator (SMS) computer. Comparison data on memory and speed were also obtained on the IBM 650, 704, 7090, 360-50, 360-67, 360-91, and 370-195; the CDC 6400, 6600, 7600, CYBER 203, and CYBER 205; the CRAY 1; and the Advanced Scientific Computer (ASC). A few lessons learned conclude the report.
Simple procedure for phase-space measurement and entanglement validation
NASA Astrophysics Data System (ADS)
Rundle, R. P.; Mills, P. W.; Tilma, Todd; Samson, J. H.; Everitt, M. J.
2017-08-01
It has recently been shown that it is possible to represent the complete quantum state of any system as a phase-space quasiprobability distribution (Wigner function) [Phys. Rev. Lett. 117, 180401 (2016), 10.1103/PhysRevLett.117.180401]. Such functions take the form of expectation values of an observable that has a direct analogy to displaced parity operators. In this work we give a procedure for the measurement of the Wigner function that should be applicable to any quantum system. We have applied our procedure to IBM's Quantum Experience five-qubit quantum processor to demonstrate that we can measure and generate the Wigner functions of two different Bell states as well as the five-qubit Greenberger-Horne-Zeilinger state. Because Wigner functions for spin systems are not unique, we define, compare, and contrast two distinct examples. We show how the use of these Wigner functions leads to an optimal method for quantum state analysis especially in the situation where specific characteristic features are of particular interest (such as for spin Schrödinger cat states). Furthermore we show that this analysis leads to straightforward, and potentially very efficient, entanglement test and state characterization methods.
Three-dimensional object recognition based on planar images
NASA Astrophysics Data System (ADS)
Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.
1993-01-01
This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedron objects that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the model in the database. Besides its identification ability, the system can also provide important position and orientation information about the recognized object. The system was implemented on an IBM PC/AT machine executing at 8 MHz without the 80287 maths co-processor. In our overall performance evaluation, based on a test of 600 recognition cycles, the system demonstrated an accuracy of above 80% with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions, which must be clinically controlled as in any industrial robotic vision system.
Recent Progress on the Parallel Implementation of Moving-Body Overset Grid Schemes
NASA Technical Reports Server (NTRS)
Wissink, Andrew; Allen, Edwin (Technical Monitor)
1998-01-01
Viscous calculations about geometrically complex bodies in which there is relative motion between component parts are among the most computationally demanding problems facing CFD researchers today. This presentation documents results from the first two years of a CHSSI-funded effort within the U.S. Army AFDD to develop scalable dynamic overset grid methods for unsteady viscous calculations with moving-body problems. The first part of the presentation will focus on results from OVERFLOW-D1, a parallelized moving-body overset grid scheme that employs traditional Chimera methodology. The two processes that dominate the cost of such problems are the flow solution on each component and the intergrid connectivity solution. Parallel implementations of the OVERFLOW flow solver and DCF3D connectivity software are coupled with a proposed two-part static-dynamic load balancing scheme and tested on the IBM SP and Cray T3E multi-processors. The second part of the presentation will cover some recent results from OVERFLOW-D2, a new flow solver that employs Cartesian grids with various levels of refinement, facilitating solution adaption. A study of the parallel performance of the scheme on large distributed-memory multiprocessor computer architectures will be reported.
Research on Key Technologies of Cloud Computing
NASA Astrophysics Data System (ADS)
Zhang, Shufen; Yan, Hongcan; Chen, Xuebin
With the development of multi-core processors, virtualization, distributed storage, broadband Internet, and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks across a resource pool consisting of a massive number of computers, so that application systems can obtain computing power, storage space, and software services according to demand. It concentrates all computing resources and manages them automatically through software, without human intervention. This frees application providers from tedious operational details and lets them focus on their business, which favors innovation and reduces cost. The ultimate goal of cloud computing is to provide computation, services, and applications as a public utility, so that people can use computing resources just as they use water, electricity, gas, and the telephone. Currently, the understanding of cloud computing is still developing and changing, and there is as yet no unanimous definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS, and IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM, and other companies, summarizes the basic characteristics of cloud computing, and emphasizes key technologies such as data storage, data management, virtualization, and the programming model.
BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
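A few of the BLAS Level 1 operations named above (vector plus a scalar times a vector, dot product, and location of the largest-magnitude element) can be sketched in plain Python for illustration. This is a minimal sketch of the operations' semantics only; production BLAS is implemented in optimized Fortran and assembler, and the Python function names here are informal stand-ins for DAXPY, DDOT, and IDAMAX:

```python
def daxpy(alpha, x, y):
    """Return alpha*x + y elementwise (semantics of BLAS-1 DAXPY)."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Dot product of two vectors (semantics of BLAS-1 DDOT)."""
    return sum(xi * yi for xi, yi in zip(x, y))

def idamax(x):
    """Index of the element with the largest magnitude (semantics of BLAS-1 IDAMAX)."""
    return max(range(len(x)), key=lambda i: abs(x[i]))
```

Note that Fortran IDAMAX returns a 1-based index; the sketch follows Python's 0-based convention.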
Concurrent processing simulation of the space station
NASA Technical Reports Server (NTRS)
Gluck, R.; Hale, A. L.; Sunkel, John W.
1989-01-01
The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, each of which required significant advancement of the state of the art: (1) the development of an explicit mathematical model, via symbol manipulation, of a flexible multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent versus sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.
NASA Technical Reports Server (NTRS)
1993-01-01
This is the Final Technical Report for the NetView Technical Research task. This report is prepared in accordance with Contract Data Requirements List (CDRL) item A002. NetView assistance was provided and details are presented under the following headings: NetView Management Systems (NMS) project tasks; WBAFB IBM 3090; WPAFB AMDAHL; WPAFB IBM 3084; Hill AFB; McClellan AFB AMDAHL; McClellan AFB IBM 3090; and Warner-Robins AFB.
NASA Astrophysics Data System (ADS)
Fang, Jiannong; Porté-Agel, Fernando
2016-09-01
Accurate modeling of complex terrain, especially steep terrain, in the simulation of wind fields remains a challenge. It is well known that the terrain-following coordinate transformation method (TFCT) generally used in atmospheric flow simulations is restricted to non-steep terrain with slope angles less than 45 degrees. Due to the advantage of keeping the basic computational grids and numerical schemes unchanged, the immersed boundary method (IBM) has been widely implemented in various numerical codes to handle arbitrary domain geometry including steep terrain. However, IBM could introduce considerable implementation errors in wall modeling through various interpolations because an immersed boundary is generally not co-located with a grid line. In this paper, we perform an intercomparison of TFCT and IBM in large-eddy simulation of a turbulent wind field over a three-dimensional (3D) hill for the purpose of evaluating the implementation errors in IBM. The slopes of the three-dimensional hill are not steep and, therefore, TFCT can be applied. Since TFCT is free from interpolation-induced implementation errors in wall modeling, its results can serve as a reference for the evaluation so that the influence of errors from wall models themselves can be excluded. For TFCT, a new algorithm for solving the pressure Poisson equation in the transformed coordinate system is proposed and first validated for a laminar flow over periodic two-dimensional hills by comparing with a benchmark solution. For the turbulent flow over the 3D hill, the wind-tunnel measurements used for validation contain both vertical and horizontal profiles of mean velocities and variances, thus allowing an in-depth comparison of the numerical models. In this case, TFCT is expected to be preferable to IBM. This is confirmed by the presented results of comparison. It is shown that the implementation errors in IBM lead to large discrepancies between the results obtained by TFCT and IBM near the surface. 
The effects of different schemes used to implement wall boundary conditions in IBM are studied. The source of errors and possible ways to improve the IBM implementation are discussed.
FFT Computation with Systolic Arrays, A New Architecture
NASA Technical Reports Server (NTRS)
Boriakoff, Valentin
1994-01-01
The use of the Cooley-Tukey algorithm for computing the 1-D FFT lends itself to a particular matrix factorization which suggests direct implementation by linearly-connected systolic arrays. Here we present a new systolic architecture that embodies this algorithm. This implementation requires a smaller number of processors and a smaller number of memory cells than other recent implementations, as well as having all the advantages of systolic arrays. For the implementation of the decimation-in-frequency case, word-serial data input allows continuous real-time operation without the need of a serial-to-parallel conversion device. No control or data stream switching is necessary. Computer simulation of this architecture was done in the context of a 1024-point DFT with a fixed-point processor, and CMOS processor implementation has started.
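The Cooley-Tukey factorization that the systolic architecture embodies can be sketched in software as a short recursive radix-2 FFT. This is a generic decimation-in-time illustration, not the paper's hardware design (which implements the decimation-in-frequency case with word-serial input):

```python
import cmath

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])  # DFT of even-indexed samples
    odd = fft(x[1::2])   # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor exp(-2*pi*i*k/n) combines the two half-size DFTs.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

For example, a constant input of length 4 transforms to a single DC bin, and a unit impulse transforms to an all-ones spectrum.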
NASA Astrophysics Data System (ADS)
Feng, Bing
Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computation. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm, applied to the fully parallelized particle-in-cell code QuickPIC, is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes job allocation among the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement from the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models: the beam space-charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed for the upgrade of increasing the bunch population by 5 times. But the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth.
Then the enhanced QuickPIC is used to simulate the electron cloud effect on electron beam in the Cornell Energy Recovery Linac (ERL) due to extremely small emittance and high peak currents anticipated in the machine. A tune shift is discovered from the simulation; however, emittance growth of the electron beam in electron cloud is not observed for ERL parameters.
IBM PC enhances the world's future
NASA Technical Reports Server (NTRS)
Cox, Jozelle
1988-01-01
Although the purpose of this research is to illustrate the importance of computers to the public, particularly the IBM PC, the present examination also includes computers developed before the IBM PC was brought into use. IBM, as well as other computing companies, began serving the public years ago and continues to find ways to enhance the existence of man. With new developments in supercomputers like the Cray-2, and the recent advances in artificial intelligence programming, the human race is gaining knowledge at a rapid pace. All have benefited from the development of computers in the world; not only have computers brought new assets to life, but they have made life more and more of a challenge every day.
Issues Identified During September 2016 IBM OpenMP 4.5 Hackathon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richards, David F.
In September 2016, IBM hosted an OpenMP 4.5 Hackathon at the T.J. Watson Research Center. Teams from LLNL, ORNL, SNL, LANL, and LBNL attended the event. As with the 2015 hackathon, IBM produced an extremely useful and successful event with unmatched support from the compiler team, applications staff, and facilities. Approximately 24 IBM staff supported the 4-day hackathon and spent significant time 4-6 weeks beforehand preparing the environment and becoming familiar with the apps. This hackathon was also the first event to feature the LLVM and XL C/C++ and Fortran compilers. This report records many of the issues encountered by the LLNL teams during the hackathon.
CTF Preprocessor User's Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, Maria; Salko, Robert K.
2016-05-26
This document describes how a user should go about using the CTF preprocessor tool to create an input deck for modeling rod-bundle geometry in CTF. The tool was designed to generate input decks for CTF in a quick and less error-prone manner. The preprocessor is a completely independent utility, written in Fortran, that takes a reduced amount of input from the user. The information that the user must supply is basic information on bundle geometry, such as rod pitch, clad thickness, and axial location of spacer grids; the preprocessor takes this basic information and determines channel placement and connection information to be written to the input deck, which is the most time-consuming and error-prone segment of creating a deck. Creation of the model is also more intuitive, as the user can specify assembly and water-tube placement using visual maps instead of having to place them by determining channel/channel and rod/channel connections. As an example of the benefit of the preprocessor, a quarter-core model that contains 500,000 scalar-mesh cells was read into CTF from an input deck containing 200,000 lines of data. This 200,000-line input deck was produced automatically from a set of preprocessor decks that contained only 300 lines of data.
Barohn, Richard J.
2014-01-01
The idiopathic inflammatory myopathies (IIM) are a heterogeneous group of rare disorders that share many similarities. In addition to sporadic inclusion body myositis (IBM), these include dermatomyositis (DM), polymyositis (PM), and autoimmune necrotizing myopathy (NM). For discussion of the latter three disorders, the reader is referred to the IIM review in this issue. IBM is the most common IIM after age 50. It typically presents with chronic insidious proximal leg and/or distal arm asymmetric muscle weakness leading to recurrent falls and loss of dexterity. Creatine kinase (CK) is elevated up to 15 times normal in IBM, and needle electromyography (EMG) mostly shows a chronic irritative myopathy. Muscle histopathology demonstrates endomysial inflammatory exudates surrounding and invading non-necrotic muscle fibers, oftentimes accompanied by rimmed vacuoles and protein deposits. Despite inflammatory muscle pathology suggesting similarity with PM, it is likely that IBM has a prominent degenerative component, as supported by its refractoriness to immunosuppressive therapy. We review the evolution of our knowledge of IBM with emphasis on recent developments in the field and discuss ongoing clinical trials. PMID:25037082
Enhancements to the IBM version of COSMIC/NASTRAN
NASA Technical Reports Server (NTRS)
Brown, W. Keith
1989-01-01
Major improvements were made to the IBM version of COSMIC/NASTRAN by RPK Corporation under contract to IBM Corporation. These improvements will become part of COSMIC's IBM version and will be available in the second quarter of 1989. The first improvement is the inclusion of code to take advantage of IBM's new Vector Facility (VF) on its 3090 machines. The remaining improvements are modifications that will benefit all users as a result of the extended addressing capability provided by the MVS/XA operating system. These improvements include the availability of an in-memory database that potentially eliminates the need for I/O to the PRIxx disk files. Another improvement is the elimination of multiple load modules that have to be loaded for every link switch within NASTRAN. The last improvement allows NASTRAN to execute above the 16-megabyte line, giving it access to 2 gigabytes of memory for open core and the in-memory database.
Computer Architecture for Energy Efficient SFQ
2014-08-27
IBM Corporation (T.J. Watson Research Laboratory), 1101 Kitchawan Road, Yorktown Heights, NY 10598. ABSTRACT: ... accomplished during this ARO-sponsored project at IBM Research to identify and model an energy-efficient SFQ-based computer architecture. The ... IBM Windsor Blue (WB), illustrated schematically in Figure 2. The basic building block of WB is a "tile" comprised of a 64-bit arithmetic logic unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giucci, D.
1963-01-01
A program was devised for calculating the cube and fifth roots of a number by Newton's method using the IBM 610 electronic computer. For convenience, a program was added for obtaining nth roots by the logarithmic method. (auth)
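Newton's method for an nth root, as used in the program described above, can be sketched in a few lines. This is a generic modern illustration (the 1963 IBM 610 program itself is not reproduced here, and the function name and parameters are hypothetical); it iterates the standard Newton update for the root of x^n - a = 0:

```python
def nth_root(a, n, tol=1e-12, max_iter=100):
    """Newton's method for the n-th root of a positive number a.

    Iterates x_{k+1} = ((n - 1) * x_k + a / x_k**(n - 1)) / n,
    the Newton step for f(x) = x**n - a.
    """
    x = a if a >= 1 else 1.0  # simple positive starting guess
    for _ in range(max_iter):
        x_new = ((n - 1) * x + a / x ** (n - 1)) / n
        if abs(x_new - x) < tol:  # stop once successive iterates agree
            return x_new
        x = x_new
    return x
```

Because Newton's method converges quadratically near the root, only a handful of iterations are needed even for large inputs.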
1987-06-01
International Business Machines (IBM) Corporation compatible synchronous terminals (2780/3780/327X), and the Federal Data Corporation (FDC) has developed ... the interfaces for Burroughs look-alike asynchronous and synchronous terminals. Basically, this means that the IBM and Burroughs protocols are ... and other vendor computers, such as IBM, UNIVAC, and Honeywell. The Navy has developed file transfer capabilities between Tandem and Burroughs. These
MEMS-based fuel cells with integrated catalytic fuel processor and method thereof
Jankowski, Alan F [Livermore, CA; Morse, Jeffrey D [Martinez, CA; Upadhye, Ravindra S [Pleasanton, CA; Havstad, Mark A [Davis, CA
2011-08-09
Described herein is a means to incorporate catalytic materials into the fuel flow field structures of MEMS-based fuel cells, which enable catalytic reforming of a hydrocarbon based fuel, such as methane, methanol, or butane. Methods of fabrication are also disclosed.
Compositional Variations of Paleogene and Neogene Tephra From the Northern Izu-Bonin-Mariana Arc
NASA Astrophysics Data System (ADS)
Tepley, F. J., III; Barth, A. P.; Brandl, P. A.; Hickey-Vargas, R.; Jiang, F.; Kanayama, K.; Kusano, Y.; Li, H.; Marsaglia, K. M.; McCarthy, A.; Meffre, S.; Savov, I. P.; Yogodzinski, G. M.
2014-12-01
A primary objective of IODP Expedition 351 was to evaluate arc initiation processes of the Izu-Bonin-Mariana (IBM) volcanic arc and its compositional evolution through time. To this end, a single thick section of sediment overlying oceanic crust was cored in the Amami Sankaku Basin where a complete sediment record of arc inception and evolution is preserved. This sediment record includes ash and pyroclasts, deposited in fore-arc, arc, and back-arc settings, likely associated with both the ~49-25 Ma emergent IBM volcanic arc and the evolving Ryukyu-Kyushu volcanic arc. Our goal was to assess the major element evolution of the nascent and evolving IBM system using the temporally constrained record of the early and developing system. In all, more than 100 ash and tuff layers, and pyroclastic fragments were selected from temporally resolved portions of the core, and from representative fractions of the overall core ("core catcher"). The samples were prepared to determine major and minor element compositions via electron microprobe analyses. This ash and pyroclast record will allow us to 1) resolve the Paleogene evolutionary history of the northern IBM arc in greater detail; 2) determine compositional variations of this portion of the IBM arc through time; 3) compare the acquired data to an extensive whole rock and tephra dataset from other segments of the IBM arc; 4) test hypotheses of northern IBM arc evolution and the involvement of different source reservoirs; and 5) mark important stratigraphic markers associated with the Neogene volcanic history of the adjacent evolving Ryukyu-Kyushu arc.
Affordable Emerging Computer Hardware for Neuromorphic Computing Applications
2011-09-01
…speedup over software [3, 4]. Table 1 shows a comparison of the computing performance, communication performance, power consumption… time is probably 5 frames per second, corresponding to 5 saccades. III. RESULTS AND DISCUSSION: The use of IBM Cell-BE technology (Sony PlayStation…
NASA Astrophysics Data System (ADS)
Fekkar, Hakim; Benbernou, N.; Esnault, S.; Shin, H. C.; Guenounou, Moncef
1998-04-01
Immune responses are strongly influenced by the cytokines produced following antigenic stimulation. Distinct cytokine-producing T cell subsets are well known to play a major role in immune responses and to be differentially regulated during immunological disorders, although the characterization and quantification of the TH-1/TH-2 cytokine pattern in T cells has remained poorly defined. Expression of cytokines by T lymphocytes is a highly balanced process, involving stimulatory and inhibitory intracellular signaling pathways. The aims of this study were (1) to quantify cytokine expression in T cells at the single-cell level using optical imaging, and (2) to analyze the influence of the cyclic AMP-dependent signal transduction pathway on the balance between the TH-1 and TH-2 cytokine profiles. We studied several cytokines (IL-2, IFN-gamma, IL-4, IL-10 and IL-13) in peripheral blood mononuclear cells. Cells were prestimulated in vitro using phytohemagglutinin and phorbol ester for 36 h, and then further cultured for 8 h in the presence of monensin. Cells were permeabilized and then single-, double- or triple-labeled with the corresponding specific fluorescent monoclonal antibodies. The cell phenotype was also determined by analyzing the expression of each of CD4, CD8, CD45RO and CD45RA together with the cytokine expression. Conventional images of cells were recorded with a Peltier-cooled CCD camera (B/W C5985, Hamamatsu Photonics) through an inverted microscope equipped with epi-fluorescence (Diaphot 300, Nikon). Images were digitized using a video acquisition interface (Oculus TCX, Coreco) at 762 by 570 pixels coded in 8 bits (256 gray levels), and analyzed on an IBM PC based on an Intel Pentium processor using dedicated software (Visilog 4, Noesis). The first image processing step is the extraction of cell areas using edge detection and binary thresholding.
To reduce the background fluorescence noise, we performed a morphological opening of the original image using a structuring element. The opened image was then subtracted from the original, and the gray intensities were subsequently measured. Fluorescence intensities were mapped in MD representation using Matlab software. Consequently, quantitative comparative expression of intracellular cytokines and cell membrane markers was achieved. Using this technique, we showed that CD4+ and CD8+ T lymphocytes expressed a large panel of cytokines, and that the protein kinase A (PKA) activation pathway induced a polarization of activated human T cells toward the TH-2 profile. The data also showed different sensitivities of CD45RO/CD45RA lymphocytes to the activation of PKA, suggesting the implication of memory CD4+ and CD8+ T cells in T cell specific immune and inflammatory processes and their control by the PKA activation pathway. Finally, this method represents a powerful tool for the detection and quantification of intracellular cytokine expression and the analysis of the functional properties of T lymphocytes during immune responses.
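The background-subtraction step described above (a morphological opening, then subtracting the opened image from the original) is the classical white top-hat transform. A minimal sketch, assuming a grayscale NumPy array and SciPy's morphology routines rather than the original Visilog software:

```python
import numpy as np
from scipy import ndimage

def subtract_background(image, size=15):
    """White top-hat transform: a grayscale opening removes bright
    features smaller than the structuring element, so subtracting the
    opened image from the original isolates those features and
    suppresses the smooth fluorescence background."""
    opened = ndimage.grey_opening(image, size=(size, size))
    return image - opened
```

The intensities of the result can then be measured directly, as in the study.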
Thomas, David A
2004-09-01
IBM's turnaround in the last decade is an impressive and well-documented business story. But behind that success is a less told people story, which explains how the corporation dramatically altered its already diverse composition and created millions of dollars in new business. By the time Lou Gerstner took the helm in 1993, IBM had a long history of progressive management when it came to civil rights and equal-opportunity employment. But Gerstner felt IBM wasn't taking full advantage of a diverse market for talent, nor was it maximizing the potential of its diverse customer and employee base. So in 1995, he launched a diversity task force initiative to uncover and understand differences among people within the organization and find ways to appeal to an even broader set of employees and customers. Gerstner established a task force for each of eight constituencies: Asians; blacks; the gay, lesbian, bisexual, transgendered community; Hispanics; white men; Native Americans; people with disabilities; and women. He asked the task forces to research four questions: What does your constituency need to feel welcome and valued at IBM? What can the corporation do, in partnership with your group, to maximize your constituency's productivity? What can the corporation do to influence your constituency's buying decisions so that IBM is seen as a preferred solution provider? And with which external organizations should IBM form relationships to better understand the needs of your constituency? The answers to these questions became the basis for IBM's diversity strategy. Thomas stresses that four factors are key to implementing any major change initiative: strong support from company leaders, an employee base that is fully engaged with the initiative, management practices that are integrated and aligned with the effort, and a strong and well-articulated business case for action. All four elements have helped IBM make diversity a key corporate strategy tied to real growth.
Vincenot, Christian E
2018-03-14
Progress in understanding and managing complex systems composed of decision-making agents, such as cells, organisms, ecosystems or societies, is limited, like many scientific endeavours, by disciplinary boundaries. These boundaries, however, are moving and can actively be made porous or even disappear. To study this process, I advanced an original bibliometric approach based on network analysis to track and understand the development of the model-based science of agent-based complex systems (ACS). I analysed research citations between the two communities devoted to ACS research, namely agent-based modelling (ABM) and individual-based modelling (IBM). Both terms refer to the same approach, yet the former is preferred in engineering and social sciences, while the latter prevails in natural sciences. This situation provided a unique case study for grasping how a new concept evolves distinctly across scientific domains and how to foster convergence into a universal scientific approach. The present analysis, based on novel hetero-citation metrics, revealed the historical development of ABM and IBM, confirmed their past disjointedness, and detected their progressive merger. The separation between these synonymous disciplines had silently opposed the free flow of knowledge among ACS practitioners, and thereby hindered the transfer of methodological advances and the emergence of general systems theories. A surprisingly small number of key publications sparked the ongoing fusion between ABM and IBM research. Besides reviews raising awareness of broad-spectrum issues, generic protocols for model formulation and boundary-transcending inference strategies were critical means of science integration. Accessible broad-spectrum software similarly contributed to this change.
From the modelling viewpoint, the discovery of the unification of ABM and IBM demonstrates that a wide variety of systems substantiate the premise of ACS research that microscale behaviours of agents and system-level dynamics are inseparably bound. © 2018 The Author(s).
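As an illustration only (the paper's hetero-citation metrics are defined differently and in more detail), a toy hetero-citation measure can be computed as the share of citation links that cross the ABM/IBM community boundary:

```python
def hetero_citation_share(citations, community):
    """Toy hetero-citation metric: the fraction of citation links whose
    citing and cited papers belong to different communities (ABM vs IBM).
    `citations` is a list of (citing_id, cited_id) pairs; `community`
    maps each paper id to its community label."""
    if not citations:
        return 0.0
    cross = sum(1 for src, dst in citations if community[src] != community[dst])
    return cross / len(citations)
```

A value near zero indicates disjoint literatures; a rising value over time would signal the kind of merger the analysis detected.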
β4 systematics in rare-earth and actinide nuclei: sdg interacting boson model description
NASA Astrophysics Data System (ADS)
Devi, Y. D.; Kota, V. K. B.
1992-07-01
The observed variation of the hexadecupole deformation parameter β4 with mass number A in rare-earth and actinide nuclei is studied in the sdg interacting boson model (IBM), using a single-j-shell Otsuka-Arima-Iachello-mapped and IBM-2-to-IBM-1-projected hexadecupole transition operator together with SU_sdg(3) and SU_sdg(5) coherent states. The SU_sdg(3) limit is found to provide a good description of the data.
A performance evaluation of the IBM 370/XT personal computer
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1984-01-01
An evaluation of the IBM 370/XT personal computer is given. This evaluation focuses primarily on the use of the 370/XT for scientific and technical applications and applications development. A measurement of the capabilities of the 370/XT was performed by means of test programs which are presented. Also included is a review of facilities provided by the operating system (VM/PC), along with comments on the IBM 370/XT hardware configuration.
1990-05-30
…phase HPLC using an IBM Instruments Inc. model LC 9533 ternary liquid chromatograph attached to a model F9522 fixed UV module and a model F9523… acid analyses are done by separation and quantitation of phenylthiocarbamyl amino acid derivatives using a second IBM model LC 9533 ternary liquid… computer which controls the HPLC and an IBM Instruments Inc. model LC 9505 automatic sampler. The hemoglobin present in the effluent from large…
DDN (Defense Data Network) Protocol Implementations and Vendors Guide
1989-02-01
Announcement 286-259 6/16/86 MACHINE-TYPE/CPU: IBM RT/PC O/S: AIX DISTRIBUTOR: 1. IBM Marketing 2. IBM Authorized VARs 3. Authorized Personal Computer… Vendors Guide. PERSONAL AUTHOR(S): Dorio, Nan; Johnson, Marlyn; Lederman, Sol; Redfield, Elizabeth; Ward, Carol… documentation, contact person, and distributor. The fourth section describes analysis tools. It includes information about network analysis products
Trajectory Reconstruction Program Milestone 2/3 Report. Volume 1. Description and Overview
1974-12-16
Simulation Data Generation; Missile Trajectory Error Analysis; Modularized Program; Guidance and Targeting; Multiple Vehicle Simulation; IBM 360/370; Numerical… consists of vehicle simulation subprograms designed and written in FORTRAN for CDC 6600/7600, IBM 360/370, and UNIVAC 1108/1110 series computers. The overall…
1980-11-01
Systems: A Raytheon Project History", RADC-TR-77-188, Final Technical Report, June 1977. 4. IBM Federal Systems Division, "Statistical Prediction of...147, June 1979. 4. W. D. Brooks, R. W. Motley, "Analysis of Discrete Software Reliability Models", IBM Corp., RADC-TR-80-84, RADC, New York, April 1980...J. C. King of IBM (Reference 9) and Lori A. Clark (Reference 10) of the University of Massachusetts. Programs, so exercised must be augmented so they
SIAM Data Mining 'Brings It' to Annual Meeting
2017-02-24
address space) languages. Jose Moreira and Manoj Kumar from IBM presented the Graph Programming Interface (GPI) as well as a proposal for a common...Samsi (MIT), Dr. Manoj Kumar (IBM Research), Dr. Michel Kinsy (Boston University), and Dr. Shashank Yellapantula (GE Global Research). Dr. Gadepally...and Dr. Samsi discussed advances in data management technologies [22–25], and Dr. Kumar presented a brief overview of a graph-based API IBM is
Interactive graphics system for IBM 1800 computer
NASA Technical Reports Server (NTRS)
Carleton, T. P.; Howell, D. R.; Mish, W. H.
1972-01-01
A FORTRAN compatible software system that has been developed to provide an interactive graphics capability for the IBM 1800 computer is described. The interactive graphics hardware consists of a Hewlett-Packard 1300A cathode ray tube, Sanders photopen, digital to analog converters, pulse counter, and necessary interface. The hardware is available from IBM as several related RPQ's. The software developed permits the application programmer to use IBM 1800 FORTRAN to develop a display on the cathode ray tube which consists of one or more independent units called pictures. The software permits a great deal of flexibility in the manipulation of these pictures and allows the programmer to use the photopen to interact with the displayed data and make decisions based on information returned by the photopen.
Redox factor-1 in muscle biopsies of patients with inclusion-body myositis.
Broccolini, A; Engel, W K; Alvarez, R B; Askanas, V
2000-06-16
To determine whether redox factor-1 (Ref-1) participates in the pathogenesis of inclusion-body myositis (IBM), we immunolocalized Ref-1 in muscle biopsies of IBM patients by light- and electron-microscopy. Approximately 70-80% of the IBM vacuolated muscle fibers had focal inclusions strongly immunoreactive for Ref-1. By immunoelectronmicroscopy, Ref-1 was localized to paired-helical filaments, 6-10 nm amyloid-like fibrils and amorphous material. Virtually all regenerating and necrotic muscle fibers in various muscle biopsies had diffusely strong Ref-1 immunoreactivity. At all neuromuscular junctions, postsynaptically there was strong Ref-1 immunoreactivity. Our study suggests that Ref-1 plays a role in IBM pathogenesis, and in other pathologic and normal processes of human muscle.
A DNA sequence analysis package for the IBM personal computer.
Lagrimini, L M; Brentano, S T; Donelson, J E
1984-01-01
We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433
SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)
NASA Technical Reports Server (NTRS)
Coe, H. H.
1994-01-01
The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. 
These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64 bit words. 
The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The standard distribution medium for the CRAY version is also a 5.25 inch 360K MS-DOS format diskette, but alternate distribution media and formats are available upon request. The original version of SHABERTH was developed in FORTRAN IV at Lewis Research Center for use on a UNIVAC 1100 series computer. The Cray version was released in 1988, and was updated in 1990 to incorporate fluid rheological data for Rocket Propellant 1 (RP-1), thereby allowing the analysis of bearings lubricated with RP-1. The PC version is a port of the 1990 CRAY version and was developed in 1992 by SRS Technologies under contract to NASA Marshall Space Flight Center.
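The four nested calculation schemes described above amount to fixed-point iterations, each of which repeatedly re-solves the level beneath it. A hedged two-level toy sketch with invented update rules (not SHABERTH's actual thermal or load equations):

```python
def solve_nested(outer_update, inner_solver, state, tol=1e-6, max_iter=200):
    """Generic nested equilibrium loop: the outer scheme (e.g. thermal
    equilibrium) iterates an outer variable, and each pass re-solves the
    inner problem (e.g. load equilibrium) for the current state,
    mirroring SHABERTH's nested structure.  Returns the converged outer
    state and the matching inner solution."""
    inner = inner_solver(state)
    for _ in range(max_iter):
        inner = inner_solver(state)          # re-solve inner level
        new = outer_update(state, inner)     # advance outer level
        if abs(new - state) < tol:           # outer variable stopped moving
            return new, inner
        state = new
    return state, inner
```

SHABERTH stacks four such levels; each additional level simply wraps another `solve_nested`-like loop around the one below it.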
Performance of Distributed CFAR Processors in Pearson Distributed Clutter
NASA Astrophysics Data System (ADS)
Messali, Zoubeida; Soltani, Faouzi
2006-12-01
This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating in positive alpha-stable (PαS) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest of (GO) and the smallest of (SO) CFAR processors are also determined. The performance characteristics of the distributed systems are presented and compared both in homogeneous situations and in the presence of interfering targets. We demonstrate, via simulation results, that when the clutter is modelled as a positive alpha-stable distribution, the distributed systems offer robustness against multiple-target situations, especially when using the "OR" fusion rule.
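For reference, the baseline cell-averaging scheme that the paper extends can be sketched as follows. This is a standard CA-CFAR for exponentially distributed noise power; note that the closed-form scale factor used here does not apply to heavy-tailed Pearson clutter, where the threshold must be set differently:

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR: estimate the noise level from training cells
    on both sides of each cell under test (CUT), skipping guard cells,
    and declare a detection when the CUT exceeds a scaled estimate.
    The scale factor alpha is the classical result for exponential
    (square-law-detected Gaussian) noise."""
    n = len(power)
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        lead = power[i - half : i - num_guard]          # leading training cells
        lag = power[i + num_guard + 1 : i + half + 1]   # lagging training cells
        noise = (lead.sum() + lag.sum()) / num_train
        detections[i] = power[i] > alpha * noise
    return detections
```

A distributed system runs one such detector per sensor and combines the local decisions with a fusion rule such as "OR" (detect if any sensor detects) or "AND".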
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase focusing function while the radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single-look, 4000-range-cell azimuth processor with 25 meters resolution.
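The approximation idea can be illustrated numerically: expanding the quadratic phase k·t² to first order about each segment midpoint leaves a phase error bounded by k(Δt/2)² per segment, so a modest number of segments keeps the focusing error small. A sketch with invented parameter names (not the processor's actual design values):

```python
import numpy as np

def quadratic_phase(t, k):
    """Ideal quadratic focusing phase."""
    return k * t**2

def segmented_linear(t, k, num_segments=8):
    """Piecewise-linear approximation of the quadratic phase: within
    each segment the phase ramp (slope) is constant, so the expensive
    complex multiplies reduce to cheap constant phase increments."""
    edges = np.linspace(t.min(), t.max(), num_segments + 1)
    approx = np.empty_like(t)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (t >= lo) & (t <= hi)
        mid = 0.5 * (lo + hi)
        # first-order Taylor expansion of k*t^2 about the segment midpoint
        approx[mask] = k * mid**2 + 2 * k * mid * (t[mask] - mid)
    return approx
```

Doubling the number of segments quarters the worst-case phase error, which is the trade between chip count and focusing quality the study exploits.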
High speed quantitative digital microscopy
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Price, K. H.; Eskenazi, R.; Ovadya, M. M.; Navon, M. A.
1984-01-01
Modern digital image processing hardware makes possible quantitative analysis of microscope images at high speed. This paper describes an application to automatic screening for cervical cancer. The system uses twelve MC6809 microprocessors arranged in a pipeline multiprocessor configuration. Each processor executes one part of the algorithm on each cell image as it passes through the pipeline. Each processor communicates with its upstream and downstream neighbors via shared two-port memory. Thus no time is devoted to input-output operations as such. This configuration is expected to be at least ten times faster than previous systems.
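The pipeline organization described above can be mimicked in software: one worker per stage (standing in for each MC6809 processor) and FIFO queues standing in for the shared two-port memories. A sketch, not the original firmware:

```python
import threading
import queue

def pipeline(stages, items):
    """Run `items` through a chain of stage functions, one worker thread
    per stage.  Each queue plays the role of the shared two-port memory
    between neighbouring processors: a stage reads from its upstream
    queue and writes to its downstream queue, so all stages operate on
    different items concurrently."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(fn, qin, qout):
        while True:
            item = qin.get()
            if item is None:      # sentinel: shut down and propagate it
                qout.put(None)
                break
            qout.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for th in threads:
        th.start()
    for it in items:              # feed cell images into the first stage
        qs[0].put(it)
    qs[0].put(None)
    out = []
    while (result := qs[-1].get()) is not None:
        out.append(result)
    for th in threads:
        th.join()
    return out
```

As in the hardware version, throughput is set by the slowest stage, and no time is spent on explicit input-output between stages.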
NASA Astrophysics Data System (ADS)
Medynski, S.; Busby, C.; DeBari, S. M.; Morris, R.; Andrews, G. D.; Brown, S. R.; Schmitt, A. K.
2016-12-01
The Rosario segment of the Cretaceous Alisitos arc in Baja California is an outstanding field analog for the Izu-Bonin-Mariana (IBM) arc, because it is structurally intact, unmetamorphosed, and has superior three-dimensional exposures of an upper- to middle-crustal section through an extensional oceanic arc. Previous work [1], done in the pre-digital era, used geologic mapping to define two phases of arc evolution, with normal faulting in both phases: (1) extensional oceanic arc, with silicic calderas, and (2) oceanic arc rifting, with widespread diking and dominantly mafic effusions. Our new geochemical data match the extensional zone immediately behind the Izu arc front, and are different from the arc front and rear arc, consistent with geologic relations. Our study is developing a 3D oceanic arc crustal model, with geologic maps draped on Google Earth images, and GPS-located outcrop information linked to new geochemical, geochronological and petrographic data, with the goal of detailing the relationships between plutonic, hypabyssal, and volcanic rocks. This model will be used by scientists as a reference model for past (IBM-1, 2, 3) and proposed (IBM-4) drilling activities. New single-crystal zircon analysis by TIMS supports the interpretation, based on batch SIMS analysis of chemically abraded zircon [1], that the entire upper- to middle-crustal section accumulated in about 1.5 Myr. Like the IBM, volcanic zircons are very sparse, but zircon chemistry on the plutonic rocks shows trace element compositions that overlap with those measured in IBM volcanic zircons by A. Schmitt (unpublished data). Zircons have U-Pb ages up to 20 Myr older than the eruptive age, suggesting remelting of older parts of the arc, similar to that proposed for the IBM (using different evidence). Like the IBM, some very old zircons are also present, indicating the presence of old crustal fragments, or sediments derived from them, in the basement.
However, our geochemical data show that the magmas are differentiated from a single mantle source, so any older crust that was remelted had the same compositional characteristics. This is similar to the previous conclusion that the different parts of the Izu arc have retained their distinct compositions over the last 15 Myr [2]. [1] Busby et al. (2006), JVGR 149, 1-46. [2] Hochstaedter et al. (2000), JGR 105, 495-512.
NASA Astrophysics Data System (ADS)
Wu, J. E.; Suppe, J.; Renqi, L.; Kanda, R. V. S.
2014-12-01
Published plate reconstructions typically show the Izu-Bonin Marianas arc (IBM) forming as a result of long-lived ~50 Ma Pacific subduction beneath the Philippine Sea. These reconstructions rely on the critical assumption that the Philippine Sea was continuously coupled to the Pacific during the lifetime of the IBM arc. Because of this assumption, significant (up to 1500 km) Pacific trench retreat is required to accommodate the 2000 km of Philippine Sea/IBM northward motion since the Eocene that is constrained by paleomagnetic data. In this study, we have mapped subducted slabs of mantle lithosphere from MITP08 global seismic tomography (Li et al., 2008) and restored them to a model Earth surface to constrain plate tectonic reconstructions. Here we present two subducted slab constraints that call into question current IBM arc reconstructions: 1) The northern and central Marianas slabs form a sub-vertical 'slab wall' down to maximum 1500 km depths in the lower mantle. This slab geometry is best explained by a near-stationary Marianas trench that has remained +/- 250 km E-W of its present-day position since ~45 Ma, and does not support any significant Pacific slab retreat. 2) A vanished ocean is revealed by an extensive swath of sub-horizontal slabs at 700 to 1000 km depths in the lower mantle below present-day Philippine Sea to Papua New Guinea. We call this vanished ocean the 'East Asian Sea'. When placed in an Eocene plate reconstruction, the East Asian Sea fits west of the reconstructed Marianas Pacific trench position and north of the Philippine Sea plate. This implies that the Philippine Sea and Pacific were not adjacent at IBM initiation, but were in fact separated by a lost ocean. Here we propose a new IBM arc reconstruction constrained by subducted slabs mapped under East Asia. At ~50 Ma, the present-day IBM arc initiated at equatorial latitudes from East Asian Sea subduction below the Philippine Sea. 
A separate arc was formed from Pacific subduction below the East Asian Sea. The Philippine Sea plate moved northwards, overrunning the East Asian Sea and the two arcs collided between 15 to 20 Ma. From 15 Ma to the present, IBM arc magmatism was produced by Pacific subduction beneath the Philippine Sea.
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
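The domain-division step can be sketched as follows: a toy one-dimensional strip decomposition with halo identification. The actual algorithm, data structures, and communication routines in the paper's implementation differ; this only illustrates the idea of splitting the spatial domain between processes and flagging the cells that must be exchanged:

```python
def partition_domain(cells, num_procs, x_min, x_max):
    """Split an off-lattice cell population into equal-width strips
    along x, one strip per process.  `cells` is a list of (x, y)
    positions."""
    width = (x_max - x_min) / num_procs
    strips = [[] for _ in range(num_procs)]
    for x, y in cells:
        idx = min(int((x - x_min) / width), num_procs - 1)
        strips[idx].append((x, y))
    return strips

def halo_cells(strip, lo, hi, interaction_radius):
    """Cells within one interaction radius of a strip boundary: these
    are the ones a neighbouring process needs in order to compute the
    forces on its own cells, so they must be communicated each step."""
    return [(x, y) for x, y in strip
            if x - lo < interaction_radius or hi - x < interaction_radius]
```

Dynamic load balancing then amounts to moving the strip boundaries so each process holds a comparable number of cells as the population grows or clusters.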
Design and development of an IBM/VM menu system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cazzola, D.J.
1992-10-01
This report describes a full screen menu system developed using IBM's Interactive System Productivity Facility (ISPF) and the REXX programming language. The software was developed for the 2800 IBM/VM Electrical Computer Aided Design (ECAD) system. The system was developed to deliver electronic drawing definitions to a corporate drawing release system. Although this report documents the status of the menu system when it was retired, the methodologies used and the requirements defined are very applicable to replacement systems.
List of Research Publications 1940-1980
1981-10-01
…comparison of the amount of tolerance for misplaced answers found in the GPO and the IBM machine-scored answer sheets. January 1942. (X6304)… machine scoring of answer sheets. March 1942. The effect of the use of No. 1 pencils on the accuracy of scoring IBM answer sheets by machine. July 1942… (X6427) Hobbies - IBM code. Relationship of Classification Test, R-1 and WAC Classification Test-2 for a recruiting station population
A Dimensionality Reduction Technique for Enhancing Information Context.
1980-06-01
…table, memory requirements for the difference arrays are based on the FORTRAN G programming language as implemented on an IBM 360/67. Single… the greatest amount of insight. All studies were performed on an IBM 360/67. Transformation… numerical results were produced as well as two… the origin to (19,19,19,19,19,19,19,19,19,19). Two classes were generated in each case. The samples were synthetically derived using the IBM 360/67 and
IBM PC/IX operating system evaluation plan
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Granier, Martin; Hall, Philip P.; Triantafyllopoulos, Spiros
1984-01-01
An evaluation plan for the IBM PC/IX Operating System designed for IBM PC/XT computers is discussed. The evaluation plan covers the areas of performance measurement and evaluation, software facilities available, man-machine interface considerations, networking, and the suitability of PC/IX as a development environment within the University of Southwestern Louisiana NASA PC Research and Development project. In order to compare and evaluate the PC/IX system, comparisons with other available UNIX-based systems are also included.
SEI Software Engineering Education Directory.
1987-02-01
Software Design and Development, Gilbert, Philip. Systems: CDC Cyber 170/750, CDC Cyber 170/760, DEC PDP 11/44, PRIME, AT&T 3B5, IBM PC, IBM XT, IBM RT… Macintosh, VAX 8300. Software System Development and Laboratory CS 480/480L U P X T. Textbooks: Software Design and Development, Gilbert, Philip. Systems: CDC… Acting Chair (618) 692-2386. Courses: Software Design and Development CS 424 U P E Y. Textbooks: Software Design and Development, Gilbert, Philip. Topics
Quality of red blood cells washed using a second wash sequence on an automated cell processor.
Hansen, Adele L; Turner, Tracey R; Kurach, Jayme D R; Acker, Jason P
2015-10-01
Washed red blood cells (RBCs) are indicated for immunoglobulin (Ig)A-deficient recipients when RBCs from IgA-deficient donors are not available. Canadian Blood Services recently began using the automated ACP 215 cell processor (Haemonetics Corporation) for RBC washing, and its suitability to produce IgA-deficient RBCs was investigated. RBCs produced from whole blood donations by the buffy coat (BC) and whole blood filtration (WBF) methods were washed using the ACP 215 or the COBE 2991 cell processors and IgA and total protein levels were assessed. A double-wash procedure using the ACP 215 was developed, tested, and validated by assessing hemolysis, hematocrit, recovery, and other in vitro quality variables in RBCs stored after washing, with and without irradiation. A single wash using the ACP 215 did not meet Canadian Standards Association recommendations for washing with more than 2 L of solution and could not consistently reduce IgA to levels suitable for IgA-deficient recipients (24/26 BC RBCs and 0/9 WBF RBCs had IgA levels < 0.05 mg/dL). Using a second wash sequence, all BC and WBF units were washed with more than 2 L and had levels of IgA of less than 0.05 mg/dL. During 7 days' postwash storage, with and without irradiation, double-washed RBCs met quality control criteria, except for the failure of one RBC unit for inadequate (69%) postwash recovery. Using the ACP 215, a double-wash procedure for the production of components for IgA-deficient recipients from either BC or WBF RBCs was developed and validated. © 2015 AABB.
NASA Astrophysics Data System (ADS)
Maling, George C.
2005-09-01
Bill Lang joined IBM in the late 1950s with a mandate from Thomas Watson Jr. himself to establish an acoustics program at IBM. Bill created the facilities in Poughkeepsie, developed the local program, and was the leader in having other IBM locations with development and manufacturing responsibilities construct facilities and hire staff under the Interdivisional Liaison Program. He also directed IBM's acoustics technology program. In the mid-1960s, he led an IEEE standards group in Audio and Electroacoustics, and, with the help of James Cooley, Peter Welch, and others, introduced the fast Fourier transform to the acoustics community. He was the convenor of ISO TC 43 SC1 WG6, which began writing the 3740 series of standards in the 1970s. It was his suggestion to promote professionalism in noise control engineering that, through meetings with Leo Beranek and others, led to the founding of INCE/USA in 1971. He was also a leader of the team that founded International INCE in 1974, and he served as president from 1988 until 1999.
Microscopic derivation of IBM and structural evolution in nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, Kosuke
A Hamiltonian of the interacting boson model (IBM) is derived based on the mean-field calculations with nuclear energy density functionals (EDFs). The multi-nucleon dynamics of the surface deformation is simulated in terms of the boson degrees of freedom. The interaction strengths of the IBM Hamiltonian are determined by mapping the potential energy surfaces (PESs) of a given EDF with quadrupole degrees of freedom onto the corresponding PES of IBM. A fermion-to-boson mapping for a rotational nucleus is discussed in terms of the rotational response, which reflects a specific time-dependent feature. Ground-state correlation energy is evaluated as a signature of structural evolution. Some examples resulting from the present spectroscopic calculations are shown for neutron-rich Pt, Os and W isotopes including exotic ones.
Kork, John O.
1983-01-01
Version 1.00 of the Asynchronous Communications Support supplied with the IBM Personal Computer must be modified to be used for communications with Multics. Version 2.00 can be used as supplied, but error checking and screen printing capabilities can be added by using modifications very similar to those required for Version 1.00. This paper describes and lists required programs on Multics and appropriate modifications to both Versions 1.00 and 2.00 of the programs supplied by IBM.
1977-07-01
on an IBM 370/165 computer at The University of Kentucky using the Fortran IV, G level compiler and should be easily implemented on other computers...order as the columns of T. 3.5.3 Subroutines NROOT and EIGEN Subroutines NROOT and EIGEN are a set of subroutines from the IBM Scientific Subroutine...November 1975). [10] System/360 Scientific Subroutine Package, Version III, Fifth Edition (August 1970), IBM Corporation, Technical Publications
2007-09-01
the smaller ERP companies that produce specialized ERPs for particular industries. Five former IBM employees founded SAP and created the first ERP...Computer Sciences Corporations (CSC), Price Waterhouse Coopers, EDS, and IBM [2]. Selecting the right integrators is critical because they are the link... IBM was chosen as the integrator for the NEMAIS pilot. 5. Pilot Results and Road Ahead Between late 1998 and early 2002, the four Navy pilots took
NASA Astrophysics Data System (ADS)
2009-09-01
IBM scientist wins magnetism prizes Stuart Parkin, an applied physicist at IBM's Almaden Research Center, has won the European Geophysical Society's Néel Medal and the Magnetism Award from the International Union of Pure and Applied Physics (IUPAP) for his fundamental contributions to nanodevices used in information storage. Parkin's research on giant magnetoresistance in the late 1980s led IBM to develop computer hard drives that packed 1000 times more data onto a disk; his recent work focuses on increasing the storage capacity of solid-state electronic devices.
Authoritarian Decision-Making and Alternative Patterns of Power and Influence: The Mexican IBM Case
1988-05-01
34 p. 111. For a fuller treatment, see Olga Pellicer de Brody, "El llamado a las Inversiones extranjeras, 1953-1958," in Las Empresas Transnacionales en ...Development, Hewlett-Packard, 27 February 1987. See also Steve Frazier, "Apple y Hewlett-Packard Contra una Empresa 100% de IBM en Mexico," Excelsior, 27...Heraldo. See "IBM Piensa Crear en Mexico una Planta de Microcomputadoras," El Heraldo, 22 January 1985. 125. Orme, ibid. 126. ibid. 127. It is just as
IODP Expedition 351 Izu-Bonin-Mariana Arc Origins: Preliminary Results
NASA Astrophysics Data System (ADS)
Ishizuka, O.; Arculus, R. J.; Bogus, K.
2014-12-01
Understanding how subduction zones initiate and continental crust forms in intraoceanic arcs requires knowledge of the inception and evolution of a representative intraoceanic arc, such as the Izu-Bonin-Mariana (IBM) Arc system. This can be obtained by exploring regions adjacent to an arc, where unequivocal pre-arc crust overlain by undisturbed arc-derived materials exists. IODP Exp. 351 (June-July 2014) specifically targeted evidence for the earliest evolution of the IBM system following inception. Site U1438 (4711 m water depth) is located in the Amami Sankaku Basin (ASB), west of the Kyushu-Palau Ridge (KPR), a paleo-IBM arc. Primary objectives of Exp. 351 were: 1) determine the nature of the crust and mantle pre-existing the IBM arc; 2) identify and model the process of subduction initiation and initial arc crust formation; 3) determine the compositional evolution of the IBM arc during the Paleogene; 4) establish geophysical properties of the ASB. Seismic reflection profiles indicate a ~1.3 km thick sediment layer overlying ~5.5 km thick igneous crust, presumed to be oceanic. This igneous crust seemed likely to be the basement of the IBM arc. Four holes were cored at Site U1438 spanning the entire sediment section and into basement. The cored interval comprises 5 units: uppermost Unit I is hemipelagic sediment with intercalated ash layers, presumably recording explosive volcanism mainly from the Ryukyu and Kyushu arcs; Units II and III host a series of volcaniclastic gravity-flow deposits, likely recording the magmatic history of the IBM Arc from arc initiation until 25 Ma; Siliceous pelagic sediment (Unit IV) underlies these deposits with minimal coarse-grained sediment input and may pre-date arc initiation. Sediment-basement contact occurs at 1461 mbsf. A basaltic lava flow section dominantly composed of plagioclase and clinopyroxene with rare chilled margins continues to the bottom of the Site (1611 mbsf). 
The expedition successfully recovered pre-IBM Arc basement, a volcanic and geologic record spanning pre-Arc, Arc initiation to remnant Arc stages, which permits testing for subduction initiation and subsequent Arc evolution.
Analysis of the energy efficiency of an integrated ethanol processor for PEM fuel cell systems
NASA Astrophysics Data System (ADS)
Francesconi, Javier A.; Mussati, Miguel C.; Mato, Roberto O.; Aguirre, Pio A.
The aim of this work is to investigate the energy integration and to determine the maximum efficiency of an ethanol processor for hydrogen production and fuel cell operation. Ethanol, which can be produced from renewable feedstocks or agriculture residues, is an attractive option as feed to a fuel processor. The fuel processor investigated is based on steam reforming, followed by high- and low-temperature shift reactors and preferential oxidation, which are coupled to a polymeric fuel cell. Applying simulation techniques and using thermodynamic models, the performance of the complete system has been evaluated for a variety of operating conditions and possible reforming reaction pathways. These models involve mass and energy balances, chemical equilibrium and feasible heat transfer conditions (ΔTmin). The main operating variables were determined for those conditions. The endothermic nature of the reformer has a significant effect on the overall system efficiency. The highest energy consumption is demanded by the reforming reactor, the evaporator and re-heater operations. To obtain an efficient integration, the heat exchanged between the outgoing streams of higher thermal level (reforming and combustion gases) and the feed stream should be maximized. Another process variable that affects the process efficiency is the water-to-fuel ratio fed to the reformer. Large amounts of water involve large heat exchangers and the associated heat losses. A net electric efficiency of around 35% was calculated based on the ethanol HHV. The remaining 65% is accounted for by dissipation as heat in the PEMFC cooling system (38%), energy in the flue gases (10%) and irreversibilities in the compression and expansion of gases.
In addition, it has been possible to determine the self-sufficient limit conditions, and to analyze the effect on the net efficiency of the input temperatures of the clean-up system reactors, combustion preheating, expander unit and crude ethanol as fuel.
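The 35% figure above follows directly from the HHV-based efficiency definition: net electric power divided by the chemical energy rate of the ethanol feed. A back-of-the-envelope sketch (the flow and power numbers below are illustrative assumptions, not values from the paper; only the HHV of ethanol, ~29.7 MJ/kg, is a standard figure):

```python
# Net electric efficiency on a higher-heating-value basis:
#   eta = P_el / (m_dot_ethanol * HHV)
ETHANOL_HHV = 29.7e6            # J/kg, higher heating value of ethanol

def net_electric_efficiency(p_el_watts, ethanol_kg_per_s):
    """Ratio of net electric output to the fuel's chemical energy rate."""
    return p_el_watts / (ethanol_kg_per_s * ETHANOL_HHV)

# Hypothetical operating point: 1.04 kW net output from 0.1 g/s of ethanol.
fuel_flow = 1.0e-4              # kg/s (assumed)
p_net = 1040.0                  # W (assumed)
print(round(net_electric_efficiency(p_net, fuel_flow), 2))  # 0.35
```

At this assumed operating point the fuel carries 2.97 kW of chemical energy, so 1.04 kW of electricity reproduces the roughly 35% HHV efficiency the abstract reports.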
ERIC Educational Resources Information Center
Bitter, Gary G., Ed.
1989-01-01
Describes three software packages: (1) "MacMendeleev"--database/graphic display for chemistry, grades 10-12, Macintosh; (2) "Geometry One: Foundations"--geometry tutorial, grades 7-12, IBM; (3) "Mathematics Exploration Toolkit"--algebra and calculus tutorial, grades 8-12, IBM. (MVL)
International Business Machines (IBM) Corporation Interim Agreement EPA Case No. 08-0113-00
On March 27, 2008, the United States Environmental Protection Agency (EPA), suspended International Business Machines (IBM) from receiving Federal Contracts, approved subcontracts, assistance, loans and other benefits.
The web server of IBM's Bioinformatics and Pattern Discovery group.
Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo
2003-07-01
We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.
The web server of IBM's Bioinformatics and Pattern Discovery group
Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo
2003-01-01
We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/. PMID:12824385
NASA Technical Reports Server (NTRS)
Sforzini, R. H.
1972-01-01
An analysis and a computer program are presented which represent a compromise between the more sophisticated programs using precise burning geometric relations and the textbook type of solutions. The program requires approximately 900 computer cards including a set of 20 input data cards required for a typical problem. The computer operating time for a single configuration is approximately 1 minute and 30 seconds on the IBM 360 computer. About 1 minute and 15 seconds of the time is compilation time so that additional configurations input at the same time require approximately 15 seconds each. The program uses approximately 11,000 words on the IBM 360. The program is written in FORTRAN IV and is readily adaptable for use on a number of different computers: IBM 7044, IBM 7094, and Univac 1108.
Detailed description of the Mayo/IBM PACS
NASA Astrophysics Data System (ADS)
Gehring, Dale G.; Persons, Kenneth R.; Rothman, Melvyn L.; Salutz, James R.; Morin, Richard L.
1991-07-01
The Mayo Clinic and IBM/Rochester have jointly developed a picture archiving and communication system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. The system was developed to replace the imaging system's vendor-supplied magnetic tape archiving capability. The system consists of seven MR imagers and nine CT scanners, each interfaced to the PACS via IBM Personal System/2(tm) (PS/2) computers, which act as gateways from the imaging modality to the PACS network. The PACS operates on the token-ring component of Mayo's city-wide local area network. Also on the PACS network are four optical storage subsystems used for image archival, three optical subsystems used for image retrieval, an IBM Application System/400(tm) (AS/400) computer used for database management, and multiple PS/2-based image display systems and their image servers.
Mitochondrial pathology in inclusion body myositis.
Lindgren, Ulrika; Roos, Sara; Hedberg Oldfors, Carola; Moslemi, Ali-Reza; Lindberg, Christopher; Oldfors, Anders
2015-04-01
Inclusion body myositis (IBM) is usually associated with a large number of cytochrome c oxidase (COX)-deficient muscle fibers and acquired mitochondrial DNA (mtDNA) deletions. We studied the number of COX-deficient fibers and the amount of mtDNA deletions, and whether variants in nuclear genes involved in mtDNA maintenance may contribute to the occurrence of mtDNA deletions in IBM muscle. Twenty-six IBM patients were included. COX-deficient fibers were assayed by morphometry and mtDNA deletions by qPCR. POLG was analyzed in all patients by Sanger sequencing, and C10orf2 (Twinkle), DNA2, MGME1, OPA1, POLG2, RRM2B, SLC25A4 and TYMP in six patients by next generation sequencing. Patients with many COX-deficient muscle fibers had a significantly higher proportion of mtDNA deletions than patients with few COX-deficient fibers. We found previously unreported variants in POLG and C10orf2, and IBM patients had a significantly higher frequency of an RRM2B variant than controls. POLG variants appeared more common in IBM patients with many COX-deficient fibers, but the difference was not statistically significant. We conclude that COX-deficient fibers in inclusion body myositis are associated with multiple mtDNA deletions. In IBM patients we found novel, as well as previously reported, variants in genes of importance for mtDNA maintenance that warrant further study. Copyright © 2014 Elsevier B.V. All rights reserved.
van Dijk, Lisanne V; Brouwer, Charlotte L; van der Schaaf, Arjen; Burgerhof, Johannes G M; Beukinga, Roelof J; Langendijk, Johannes A; Sijtsema, Nanna M; Steenbakkers, Roel J H M
2017-02-01
Current models for the prediction of late patient-rated moderate-to-severe xerostomia (XER12m) and sticky saliva (STIC12m) after radiotherapy are based on dose-volume parameters and baseline xerostomia (XERbase) or sticky saliva (STICbase) scores. The purpose is to improve prediction of XER12m and STIC12m with patient-specific characteristics, based on CT image biomarkers (IBMs). Planning CT-scans and patient-rated outcome measures were prospectively collected for 249 head and neck cancer patients treated with definitive radiotherapy with or without systemic treatment. The potential IBMs represent geometric, CT intensity and textural characteristics of the parotid and submandibular glands. Lasso regularisation was used to create multivariable logistic regression models, which were internally validated by bootstrapping. The prediction of XER12m could be improved significantly by adding the IBM "Short Run Emphasis" (SRE), which quantifies heterogeneity of parotid tissue, to a model with mean contra-lateral parotid gland dose and XERbase. For STIC12m, the IBM maximum CT intensity of the submandibular gland was selected in addition to STICbase and mean dose to the submandibular glands. Prediction of XER12m and STIC12m was improved by including IBMs representing heterogeneity and density of the salivary glands, respectively. These IBMs could guide additional research into the patient-specific response of healthy tissue to radiation dose. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
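The "Short Run Emphasis" feature mentioned above comes from gray-level run-length analysis: heterogeneous tissue breaks into many short runs of equal intensity, pushing SRE toward 1. A minimal sketch of how SRE could be computed (the 0-degree run direction and all function names are illustrative assumptions, not the authors' implementation):

```python
def run_lengths(image):
    """Collect (gray level, run length) pairs along each row (0-degree runs)."""
    runs = []
    for row in image:
        current, length = row[0], 1
        for value in row[1:]:
            if value == current:
                length += 1
            else:
                runs.append((current, length))
                current, length = value, 1
        runs.append((current, length))
    return runs

def short_run_emphasis(image):
    """SRE = (1/Nr) * sum over runs of 1/length**2; higher means finer texture."""
    runs = run_lengths(image)
    return sum(1.0 / (length ** 2) for _, length in runs) / len(runs)

heterogeneous = [[0, 1, 0, 1], [1, 0, 1, 0]]   # every run has length 1
homogeneous   = [[0, 0, 0, 0], [0, 0, 0, 0]]   # one length-4 run per row
print(short_run_emphasis(heterogeneous))  # 1.0
print(short_run_emphasis(homogeneous))    # 0.0625
```

The two toy images show the intended behaviour: a checkerboard-like (heterogeneous) patch scores the maximum 1.0, while a uniform patch scores much lower.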
A Survey of Some Approaches to Distributed Data Base & Distributed File System Architecture.
1980-01-01
[Figure 7-1: MUFFIN logical architecture (A = A Cell, D = D Cell)] ... Bus Interface ... Conventional Processor ...and Applied Mathematics (14), December 1966. [Kimbleton 79] Kimbleton, Stephen; Wang, Pearl; and Fong, Elizabeth. XNDM: An Experimental Network
Design of a hybrid battery charger system fed by a wind-turbine and photovoltaic power generators.
Chang Chien, Jia-Ren; Tseng, Kuo-Ching; Yan, Bo-Yi
2011-03-01
This paper develops a digital signal processor (DSP)-based controller for a solar cell and wind-turbine hybrid charging system. The system consists of solar cells, a wind turbine, a lead-acid battery, and a buck-boost converter. The solar cells and wind turbine serve as the system's main power sources and the battery as an energy storage element. The output powers of the solar cells and wind turbine fluctuate widely with weather and climate conditions; these unstable powers are adjusted by the buck-boost converter so that the most suitable output powers can be obtained. This study designs a booster using a dsPIC30F4011 digital signal controller as the core processor. The controller applies the perturb-and-observe method to obtain an effective energy circuit with a full 100 W charging system. The DSP can also be controlled, day and night, by a simple program that changes the state of the system in response to the prevailing weather conditions, allowing flexible operation.
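The perturb-and-observe method referred to above is a standard maximum-power-point tracking loop: nudge the operating voltage, and reverse direction whenever the measured power drops. A minimal sketch under a toy single-peak panel model (the curve, step size, and names are illustrative assumptions, not the paper's controller):

```python
def panel_power(v):
    """Toy PV power curve with a single maximum near v = 17 V (assumption)."""
    return max(0.0, 100.0 - (v - 17.0) ** 2)

def perturb_and_observe(v0, step=0.5, iterations=100):
    """Track the maximum power point by perturbing v and observing power."""
    v, p = v0, panel_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = panel_power(v_new)
        if p_new < p:            # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe(12.0)
print(v_mpp, p_mpp)  # settles within one step of the 17 V maximum
```

Once the tracker reaches the peak it oscillates within one step of it, which is the characteristic (and well-known) steady-state ripple of P&O tracking.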
Solid Oxide Fuel Cells Operating on Alternative and Renewable Fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiaoxing; Quan, Wenying; Xiao, Jing
2014-09-30
This DOE project at the Pennsylvania State University (Penn State) initially involved Siemens Energy, Inc. to (1) develop new fuel processing approaches for using selected alternative and renewable fuels – anaerobic digester gas (ADG) and commercial diesel fuel (with 15 ppm sulfur) – in solid oxide fuel cell (SOFC) power generation systems; and (2) conduct integrated fuel processor – SOFC system tests to evaluate the performance of the fuel processors and overall systems. Siemens Energy Inc. was to provide an SOFC system to Penn State for testing. The Siemens work was carried out at Siemens Energy Inc. in Pittsburgh, PA. The unexpected restructuring in the Siemens organization, however, led to the elimination of the Siemens Stationary Fuel Cell Division within the company. Unfortunately, this led to the Siemens subcontract with Penn State ending on September 23rd, 2010. The SOFC system was never delivered to Penn State. With the assistance of the NETL project manager, the Penn State team has since developed a collaborative research effort with Delphi as the new subcontractor, and this work involved the testing of a stack of planar solid oxide fuel cells from Delphi.
NASA Technical Reports Server (NTRS)
Aggarwal, Arun K.
1993-01-01
The computer program SASHBEAN (Sikorsky Aircraft Spherical Roller High Speed Bearing Analysis) analyzes and predicts the operating characteristics of a Single Row, Angular Contact, Spherical Roller Bearing (SRACSRB). The program runs on an IBM or IBM compatible personal computer, and for a given set of input data analyzes the bearing design for its ring deflections (axial and radial), roller deflections, contact areas and stresses, induced axial thrust, rolling element and cage rotation speeds, lubrication parameters, fatigue lives, and amount of heat generated in the bearing. The dynamic loading of rollers due to centrifugal forces and gyroscopic moments, which becomes quite significant at high speeds, is fully considered in this analysis. For a known application and its parameters, the program is also capable of performing steady-state and time-transient thermal analyses of the bearing system. The steady-state analysis capability allows the user to estimate the expected steady-state temperature map in and around the bearing under normal operating conditions. The transient analysis feature, on the other hand, provides the user a means to simulate the 'lost lubricant' condition and predict a time-temperature history of various critical points in the system. The bearing's 'time-to-failure' estimate may also be made from this (transient) analysis by considering the bearing as failed when a certain temperature limit is reached in the bearing components. The program is fully interactive and allows the user to get started and access most of its features with minimal training. For the most part, the program is menu driven, and adequate help messages are provided to guide a new user through the various menu options and data input screens. All input data, both for mechanical and thermal analyses, are read through graphical input screens, thereby eliminating any need for a separate text editor/word processor to edit/create data files.
Provision is also available to select and view the contents of output files on the monitor screen if no paper printouts are required. A separate volume (Volume-2) of this documentation describes, in detail, the underlying mathematical formulations, assumptions, and solution algorithms of this program.
NASA Astrophysics Data System (ADS)
Biset, S.; Nieto Deglioumini, L.; Basualdo, M.; Garcia, V. M.; Serra, M.
The aim of this work is to investigate a good preliminary plantwide control structure for the process of hydrogen production from bioethanol to be used in a proton exchange membrane (PEM) fuel cell, using only steady-state information. The objective is to keep the process at its optimal operating point, that is, performing energy integration to achieve maximum efficiency. Ethanol, produced from renewable feedstocks, feeds a fuel processor based on steam reforming, followed by high- and low-temperature shift reactors and preferential oxidation, which are coupled to a polymeric fuel cell. Applying steady-state simulation techniques and using thermodynamic models, the performance of the complete system with two different control structures has been evaluated for the most typical perturbations. A sensitivity analysis of the key process variables, together with the rigorous operability requirements of the fuel cell, is taken into account in defining an acceptable plantwide control structure. This is the first work showing an alternative control structure applied to this kind of process.
NASA Astrophysics Data System (ADS)
Varady, M. J.; McLeod, L.; Meacham, J. M.; Degertekin, F. L.; Fedorov, A. G.
2007-09-01
Portable fuel cells are an enabling technology for high efficiency and ultra-high density distributed power generation, which is essential for many terrestrial and aerospace applications. A key element of fuel cell power sources is the fuel processor, which should have the capability to efficiently reform liquid fuels and produce high purity hydrogen that is consumed by the fuel cells. To this end, we are reporting on the development of two novel MEMS hydrogen generators with improved functionality achieved through an innovative process organization and system integration approach that exploits the advantages of transport and catalysis on the micro/nano scale. One fuel processor design utilizes transient, reverse-flow operation of an autothermal MEMS microreactor with an intimately integrated, micromachined ultrasonic fuel atomizer and a Pd/Ag membrane for in situ hydrogen separation from the product stream. The other design features a simpler, more compact planar structure with the atomized fuel ejected directly onto the catalyst layer, which is coupled to an integrated hydrogen selective membrane.
On-board diesel autothermal reforming for PEM fuel cells: Simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cozzolino, Raffaello, E-mail: raffaello.cozzolino@unicusano.it; Tribioli, Laura
2015-03-10
Alternative power sources are nowadays the only option to provide a quick response to the current regulations on automotive pollutant emissions. The hydrogen fuel cell is one promising solution, but the nature of the gas is such that in-vehicle conversion of other fuels into hydrogen is necessary. In this paper, autothermal reforming for on-board conversion of diesel into a hydrogen-rich gas suitable for PEM fuel cells has been investigated using the simulation tool Aspen Plus. A steady-state model has been developed to analyze the fuel processor and the overall system performance. The components of the fuel processor are: the fuel reforming reactor, two water gas shift reactors, a preferential oxidation reactor and a H2 separation unit. The influence of various operating parameters such as oxygen to carbon ratio, steam to carbon ratio, and temperature on the process components has been analyzed in depth and results are presented.
NASA Astrophysics Data System (ADS)
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
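The fictitious-cell exchange described above can be illustrated with a serial toy model: a 1D grid split between two "processors", where each subdomain carries one extra halo cell per boundary that is refreshed from its neighbour before each sweep. In the actual parallel code these copies become interprocessor messages; all names and the two-block setup here are illustrative assumptions:

```python
def split_with_halo(field, parts=2):
    """Partition a 1D field, padding each piece with one fictitious cell per side."""
    n = len(field) // parts
    return [[0.0] + field[i * n:(i + 1) * n] + [0.0] for i in range(parts)]

def exchange_halos(blocks):
    """Copy boundary values into the neighbours' fictitious cells."""
    for left, right in zip(blocks, blocks[1:]):
        left[-1] = right[1]     # first interior cell of right neighbour
        right[0] = left[-2]     # last interior cell of left neighbour

def jacobi_step(block):
    """One smoothing sweep over interior cells; halo cells are read-only."""
    return [block[0]] + [(block[i - 1] + block[i + 1]) / 2.0
                         for i in range(1, len(block) - 1)] + [block[-1]]

field = [0.0, 0.0, 0.0, 0.0, 4.0, 4.0, 4.0, 4.0]
blocks = split_with_halo(field)
exchange_halos(blocks)                      # in MPI: a send/recv per interface
blocks = [jacobi_step(b) for b in blocks]
interior = [x for b in blocks for x in b[1:-1]]
print(interior)  # [0.0, 0.0, 0.0, 2.0, 2.0, 4.0, 4.0, 2.0]
```

Because the halos were refreshed first, the distributed sweep reproduces exactly what a single-processor sweep over the whole field would compute, which is the point of the fictitious-cell scheme.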
NASA Astrophysics Data System (ADS)
Christeson, G. L.; Morgan, S.; Kodaira, S.; Yamashita, M.
2015-12-01
Most of the well-preserved ophiolite complexes are believed to form in supra-subduction zone settings. One of the goals of IODP Expedition 352 was to test the supra-subduction zone ophiolite model by drilling forearc crust at the northern Izu-Bonin-Mariana (IBM) system. IBM forearc drilling successfully cored 1.22 km of volcanic lavas and underlying dikes at four sites. A surprising observation is that basement compressional velocities measured from downhole logging average ~3.0 km/s, compared to values of 5 km/s at similar basement depths at oceanic crust sites 504B and 1256D. Typically there is an inverse relationship in extrusive lavas between velocity and porosity, but downhole logging shows similar porosities for the IBM and oceanic crust sites, despite the large difference in measured compressional velocities. These observations can be explained by a difference in crack morphologies between IBM forearc and oceanic crust, with a smaller fractional area of asperity contact across cracks at Exp. 352 sites than at sites 504B and 1256D. Seismic profiles at the IBM forearc image many faults, which may be related to the crack population.
Comparison of the AMDAHL 470V/6 and the IBM 370/195 using benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snider, D.R.; Midlock, J.L.; Hinds, A.R.
1976-03-01
Six groups of jobs were run on the IBM 370/195 at the Applied Mathematics Division (AMD) of Argonne National Laboratory using the current production versions of OS/MVT 21.7 and ASP 3.1. The same jobs were then run on an AMDAHL 470V/6 at the AMDAHL manufacturing facilities in Sunnyvale, California, using the identical operating systems. Performances of the two machines are compared. Differences in the configurations were minimized. The memory size on each machine was the same, all software which had an impact on run times was the same, and the I/O configurations were as similar as possible. This allowed the comparison to be based on the relative performance of the two CPU's. As part of the studies preliminary to the acquisition of the IBM 195 in 1972, two of the groups of jobs had been run on a CDC 7600 by CDC personnel in Arden Hills, Minnesota, on an IBM 360/195 by IBM personnel in Poughkeepsie, New York, and on the AMD 360/50/75 production system in June, 1971. 6 figures, 9 tables.
Ahluwalia, Rajesh K [Burr Ridge, IL; Ahmed, Shabbir [Naperville, IL; Lee, Sheldon H. D. [Willowbrook, IL
2011-08-02
An improved fuel processor for fuel cells is provided whereby the processor starts up in less than sixty seconds, and in as little as 30 seconds or less. This rapid startup is achieved by igniting, or simply allowing to react, a small mixture of air and fuel over the catalyst of an autothermal reformer (ATR), warming the catalyst up. The ATR then produces combustible gases that are subsequently oxidized on, and simultaneously warm up, the water-gas shift zone catalysts. After normal operating temperature has been achieved, the proportion of air included with the fuel is greatly diminished.
Compact gasoline fuel processor for passenger vehicle APU
NASA Astrophysics Data System (ADS)
Severin, Christopher; Pischinger, Stefan; Ogrzewalla, Jürgen
Due to the increasing demand for electrical power in today's passenger vehicles, and with the requirements regarding fuel consumption and environmental sustainability tightening, a fuel cell-based auxiliary power unit (APU) becomes a promising alternative to the conventional generation of electrical energy via internal combustion engine, generator and battery. It is obvious that the on-board stored fuel has to be used for the fuel cell system, thus, gasoline or diesel has to be reformed on board. This makes the auxiliary power unit a complex integrated system of stack, air supply, fuel processor, electrics as well as heat and water management. Aside from proving the technical feasibility of such a system, the development has to address three major barriers: start-up time, costs, and size/weight of the systems. In this paper a packaging concept for an auxiliary power unit is presented. The main emphasis is placed on the fuel processor, as good packaging of this large subsystem has the strongest impact on overall size. The fuel processor system consists of an autothermal reformer in combination with water-gas shift and selective oxidation stages, based on adiabatic reactors with inter-cooling. The configuration was realized in a laboratory set-up and experimentally investigated. The results gained from this confirm a general suitability for mobile applications. A start-up time of 30 min was measured, while a potential reduction to 10 min seems feasible. An overall fuel processor efficiency of about 77% was measured. On the basis of the know-how gained by the experimental investigation of the laboratory set-up a packaging concept was developed. Using state-of-the-art catalyst and heat exchanger technology, the volumes of these components are fixed. However, the overall volume is higher mainly due to mixing zones and flow ducts, which do not contribute to the chemical or thermal function of the system.
Thus, the concept developed mainly focuses on minimization of those component volumes. Therefore, the packaging utilizes rectangular catalyst bricks and integrates flow ducts into the heat exchangers. A concept is presented with a 25 L fuel processor volume, including thermal insulation, for a 3 kWel auxiliary power unit. The overall size of the system, i.e. including stack, air supply and auxiliaries, can be estimated at 44 L.
Pedretti, Kevin
2008-11-18
A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
Dynamic load balance scheme for the DSMC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jin; Geng, Xiangren; Jiang, Dingwu
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used over a wide range of rarefied flow problems in the past 40 years. While DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles that is quite different from the steady state, the total number of simulator particles will change dramatically. A load balance based upon an initial distribution of particles will break down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as the weight information, a repartitioning based upon the principle that each processor handles approximately an equal total of simulator particles has been achieved. The computation must pause several times to renew the total of simulator particles in each processor and repartition the whole domain again. Thus the load balance across the processor array holds for the duration of the computation, and parallel efficiency can be improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, for both cases, the computational time can be reduced by about 50%.
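The weighted repartitioning idea can be sketched without METIS itself. Below is a minimal greedy stand-in (the function name and heuristic are illustrative assumptions, not the paper's actual METIS-based scheme) that assigns cells to processors so each holds a roughly equal total of simulator particles:

```python
def repartition(cell_particles, n_procs):
    """Assign cells to processors so each holds a roughly equal total
    of simulator particles. Greedy heuristic: place the heaviest cells
    first, each onto the currently least-loaded processor."""
    order = sorted(range(len(cell_particles)),
                   key=lambda c: cell_particles[c], reverse=True)
    loads = [0] * n_procs                  # particle total per processor
    owner = [0] * len(cell_particles)      # processor assigned to each cell
    for c in order:
        p = loads.index(min(loads))        # least-loaded processor
        owner[c] = p
        loads[p] += cell_particles[c]
    return owner, loads
```

As in the paper, this would be re-run whenever the per-cell particle counts have drifted far from the distribution used for the previous partition.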
1982-03-01
Naval Postgraduate School, Monterey, California. Master's and Engineer's thesis, March 1982. A version of the Graphics-oriented Interactive Finite element Time-sharing System (GIFTS) has been developed for an IBM computer with CP/CMS.
Study of Even-Even/Odd-Even/Odd-Odd Nuclei in Zn-Ga-Ge Region in the Proton-Neutron IBM/IBFM/IBFFM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshida, N.; Brant, S.; Zuffi, L.
We study the even-even, odd-even and odd-odd nuclei in the region including Zn-Ga-Ge in the proton-neutron IBM and the models derived from it: IBM2, IBFM2, IBFFM2. We describe ⁶⁷Ga, ⁶⁵Zn, and ⁶⁸Ga by coupling odd particles to a boson core ⁶⁶Zn. We also calculate the β⁺-decay rates among ⁶⁸Ge, ⁶⁸Ga and ⁶⁸Zn.
TAP II Beamforming System Software Final Report
1977-05-01
IBM tape formats (general description and complex coefficient records). Output to the line printer and CRT are inhibited, but the IBM tapes are written.
1989-04-20
Ada Compiler Validation Summary Report: Certificate Number 890420W1.10074, International Business Machines Corporation, IBM Development System. Validation Summary Report Number: AVF-VSR-261.0789, 89-01-26-TEL. The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation team.
NASA Astrophysics Data System (ADS)
Mohan, C.
In this paper, I survey briefly some of the recent and emerging trends in hardware and software features which impact high performance transaction processing and data analytics applications. These features include multicore processor chips, ultra large main memories, flash storage, storage class memories, database appliances, field programmable gate arrays, transactional memory, key-value stores, and cloud computing. While some applications, e.g., Web 2.0 ones, were initially built without traditional transaction processing functionality in mind, slowly system architects and designers are beginning to address such previously ignored issues. The availability, analytics and response time requirements of these applications were initially given more importance than ACID transaction semantics and resource consumption characteristics. A project at IBM Almaden is studying the implications of phase change memory on transaction processing, in the context of a key-value store. Bitemporal data management has also become an important requirement, especially for financial applications. Power consumption and heat dissipation properties are also major considerations in the emergence of modern software and hardware architectural features. Considerations relating to ease of configuration, installation, maintenance and monitoring, and improvement of total cost of ownership have resulted in database appliances becoming very popular. The MapReduce paradigm is now quite popular for large scale data analysis, in spite of the major inefficiencies associated with it.
A Comparison of PETSC Library and HPF Implementations of an Archetypal PDE Computation
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Keyes, David E.; Mehrotra, Piyush
1997-01-01
Two paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation: a nonlinear, structured-grid partial differential equation boundary value problem, using the same algorithm on the same hardware. Both paradigms, parallel libraries represented by Argonne's PETSc, and parallel languages represented by the Portland Group's HPF, are found to be easy to use for this problem class, and both are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under either paradigm includes specification of the data partitioning (corresponding to a geometrically simple decomposition of the domain of the PDE). Programming in SPMD style for the PETSc library requires writing the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm, introducing concurrency through subdomain blocking (an effort similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Correctness and scalability are cross-validated on up to 32 nodes of an IBM SP2.
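The affine global-to-local index mapping mentioned above can be sketched for a 1-D block decomposition (the helper names are illustrative assumptions, not PETSc or HPF API):

```python
def block_range(n_global, n_procs, rank):
    """Affine mapping for a 1-D block decomposition: return the
    half-open global index range [lo, hi) owned by `rank`, spreading
    any remainder over the lowest-ranked processors."""
    base, rem = divmod(n_global, n_procs)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

def global_to_local(g, lo, hi):
    """Map a global index to this subdomain's local index,
    or None if the index is owned by another processor."""
    return g - lo if lo <= g < hi else None
```

Under either paradigm the programmer effectively specifies this decomposition once; the library or compiler then handles the message passing implied by it.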
Byers, J A
1992-09-01
A compiled program, JCE-REFS.EXE (coded in the QuickBASIC language), for use on IBM-compatible personal computers is described. The program converts a DOS text file of current B-I-T-S (BIOSIS Information Transfer System) or BIOSIS Previews references into a DOS file of citations, including abstracts, in a general style used by scientific journals. The latter file can be imported directly into a word processor, or the program can convert the file into a random access database of the references. The program can search the database for up to 40 text strings with Boolean logic. Selected references in the database can be exported as a DOS text file of citations. Using the search facility, articles in the Journal of Chemical Ecology from 1975 to 1991 were searched for certain key words regarding semiochemicals, taxa, methods, chemical classes, and biological terms to determine trends in usage over the period. Positive trends were statistically significant in the use of the words: semiochemical, allomone, allelochemic, deterrent, repellent, plants, angiosperms, dicots, wind tunnel, olfactometer, electrophysiology, mass spectrometry, ketone, evolution, physiology, herbivore, defense, and receptor. Significant negative trends were found for: pheromone, vertebrates, mammals, Coleoptera, Scolytidae, Dendroctonus, lactone, isomer, and calling.
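A Boolean text-string search of the kind described can be sketched as a simple AND/OR/NOT filter over citation strings (a minimal illustration, not JCE-REFS.EXE itself, which supported up to 40 strings):

```python
def search(citations, all_terms=(), any_terms=(), not_terms=()):
    """Case-insensitive Boolean search over citation strings:
    keep a citation if it contains every term in all_terms (AND),
    at least one term in any_terms (OR, if given), and no term
    in not_terms (NOT)."""
    hits = []
    for c in citations:
        text = c.lower()
        ok_all = all(t.lower() in text for t in all_terms)
        ok_any = not any_terms or any(t.lower() in text for t in any_terms)
        ok_not = not any(t.lower() in text for t in not_terms)
        if ok_all and ok_any and ok_not:
            hits.append(c)
    return hits
```

Counting hits per year for a given term is then enough to compute the usage trends reported in the abstract.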
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Shadid, John N.; Sala, Marzio
In this study results are presented for the large-scale parallel performance of an algebraic multilevel preconditioner for solution of the drift-diffusion model for semiconductor devices. The preconditioner is the key numerical procedure determining the robustness, efficiency and scalability of the fully-coupled Newton-Krylov based, nonlinear solution method that is employed for this system of equations. The coupled system is comprised of a source-term-dominated Poisson equation for the electric potential, and two convection-diffusion-reaction type equations for the electron and hole concentrations. The governing PDEs are discretized in space by a stabilized finite element method. Solution of the discrete system is obtained through a fully-implicit time integrator, a fully-coupled Newton-based nonlinear solver, and a restarted GMRES Krylov linear system solver. The algebraic multilevel preconditioner is based on an aggressive-coarsening graph partitioning of the nonzero block structure of the Jacobian matrix. Representative performance results are presented for various choices of multigrid V-cycles and W-cycles and parameter variations for smoothers based on incomplete factorizations. Parallel scalability results are presented for solution of up to 10^8 unknowns on 4096 processors of a Cray XT3/4 and an IBM POWER eServer system.
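The fully-coupled Newton iteration at the core of such a solver can be sketched on a toy system (an illustration under stated assumptions: a dense direct solve stands in for the preconditioned, restarted GMRES solve of the actual solver stack, and the two-equation system below is only a stand-in for the coupled potential/concentration equations):

```python
import numpy as np

def newton_solve(F, J, x0, tol=1e-10, max_it=25):
    """Basic fully-coupled Newton iteration for F(x) = 0 with an
    analytic Jacobian J. Each step solves the linearized system
    J(x) dx = F(x) and updates x <- x - dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_it):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(J(x), r)   # Newton update (dense stand-in)
    return x

# Toy coupled system standing in for the potential/concentration coupling:
#   x0^2 + x1 = 2,   x0 + x1^2 = 2   (one root at x = (1, 1))
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
```

In the production solver the linear step is where the algebraic multilevel preconditioner earns its keep, since the Jacobian is far too large for a direct solve.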
A diesel fuel processor for fuel-cell-based auxiliary power unit applications
NASA Astrophysics Data System (ADS)
Samsun, Remzi Can; Krekel, Daniel; Pasel, Joachim; Prawitz, Matthias; Peters, Ralf; Stolten, Detlef
2017-07-01
Producing a hydrogen-rich gas from diesel fuel enables the efficient generation of electricity in a fuel-cell-based auxiliary power unit. In recent years, significant progress has been achieved in diesel reforming. One issue encountered is the stable operation of water-gas shift reactors with real reformates. A new fuel processor is developed using a commercial shift catalyst. The system is operated using optimized start-up and shut-down strategies. Experiments with diesel and kerosene fuels show slight performance drops in the shift reactor during continuous operation for 100 h. CO concentrations much lower than the target value are achieved during system operation in auxiliary power unit mode at partial loads of up to 60%. The regeneration leads to full recovery of the shift activity. Finally, a new operation strategy is developed whereby the gas hourly space velocity of the shift stages is re-designed. This strategy is validated using different diesel and kerosene fuels, showing a maximum CO concentration of 1.5% at the fuel processor outlet under extreme conditions, which can be tolerated by a high-temperature PEFC. The proposed operation strategy solves the issue of strong performance drop in the shift reactor and makes this technology available for reducing emissions in the transportation sector.
ERIC Educational Resources Information Center
Journal of Chemical Education, 1988
1988-01-01
Reviews three software packages: "Molecular Graphics on the Apple Microcomputer, Enhanced Version 2.0"; "Molecular Graphics on the IBM PC Microcomputer"; and "Molecular Animator, IBM PC Version." Packages are rated based on ease of use, subject matter content, pedagogic value, and student reaction. (CW)
Data Mining and Knowledge Discover - IBM Cognitive Alternatives for NASA KSC
NASA Technical Reports Server (NTRS)
Velez, Victor Hugo
2016-01-01
Cognitive computing tools capable of transforming industries have been found favorable and profitable for different Directorates at NASA KSC. This study shows how cognitive computing systems can be useful for NASA when computers are trained, in the same way humans are, to gain knowledge over time. Increasing knowledge through senses, learning, and the accumulation of events is how the applications created by IBM empower artificial intelligence in a cognitive computing system. Over the last decades NASA has explored and applied artificial intelligence, specifically cognitive computing, in a few projects adopting models similar to those proposed by IBM Watson. However, the semantic technologies used by IBM's dedicated business unit lead these cognitive computing applications to outperform in-house tools and to deliver analyses that facilitate decision making for managers and leads in a management information system.
An individual-based model of zebrafish population dynamics accounting for energy dynamics.
Beaudouin, Rémy; Goussen, Benoit; Piccini, Benjamin; Augustine, Starrlight; Devillers, James; Brion, François; Péry, Alexandre R R
2015-01-01
Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget for individual zebrafish (DEB model) was coupled to an individual based model of zebrafish population dynamics (IBM model). Next, we fitted the DEB model to new experimental data on zebrafish growth and reproduction thus improving existing models. We further analysed the DEB-model and DEB-IBM using a sensitivity analysis. Finally, the predictions of the DEB-IBM were compared to existing observations on natural zebrafish populations and the predicted population dynamics are realistic. While our zebrafish DEB-IBM model can still be improved by acquiring new experimental data on the most uncertain processes (e.g. survival or feeding), it can already serve to predict the impact of compounds at the population level.
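A population-level individual-based loop of the kind coupled to the DEB model can be sketched minimally (an illustration, not the paper's DEB-IBM; the rules and parameter names below are assumptions):

```python
import random

def simulate_population(n0, years, fecundity, survival, capacity, seed=0):
    """Minimal individual-based population model. Each individual
    survives a year with probability `survival`; every adult
    (age >= 1) then produces `fecundity` age-0 recruits, and a simple
    carrying capacity truncates the population. Returns the final
    population size."""
    rng = random.Random(seed)
    ages = [1] * n0                               # start with n0 adults
    for _ in range(years):
        survivors = [a + 1 for a in ages if rng.random() < survival]
        n_recruits = fecundity * sum(1 for a in survivors if a >= 1)
        ages = (survivors + [0] * n_recruits)[:capacity]
    return len(ages)
```

In the DEB-IBM coupling, the fixed `survival` and `fecundity` parameters would instead be driven per individual by the energy-budget state, which is how organism-level toxicity effects propagate to the population level.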
Dimachkie, Mazen M.; Barohn, Richard J.
2012-01-01
The idiopathic inflammatory myopathies are a group of rare disorders that share many similarities. These include dermatomyositis (DM), polymyositis (PM), necrotizing myopathy (NM), and sporadic inclusion body myositis (IBM). Inclusion body myositis is the most common idiopathic inflammatory myopathy after age 50, and it presents with chronic proximal leg and distal arm asymmetric muscle weakness. Despite similarities with PM, it is likely that IBM is primarily a degenerative disorder rather than an inflammatory muscle disease. Inclusion body myositis is associated with a modest degree of creatine kinase (CK) elevation and an abnormal electromyogram demonstrating an irritative myopathy with some chronicity. The muscle histopathology demonstrates inflammatory exudates surrounding and invading nonnecrotic muscle fibers, often accompanied by rimmed vacuoles. In this chapter, we review sporadic IBM. We also examine past, essentially negative, clinical trials in IBM and review ongoing clinical trials. For further details on DM, PM, and NM, the reader is referred to the idiopathic inflammatory myopathies chapter. PMID:23117948
Hereditary inclusion-body myopathy: clues on pathogenesis and possible therapy.
Broccolini, Aldobrando; Gidaro, Teresa; Morosetti, Roberta; Mirabella, Massimiliano
2009-09-01
Hereditary inclusion-body myopathy (h-IBM), or distal myopathy with rimmed vacuoles (DMRV), is an autosomal recessive disorder with onset in early adult life and a progressive course leading to severe disability. h-IBM/DMRV is due to mutations of a gene (GNE) that codes for a rate-limiting enzyme in the sialic acid biosynthetic pathway. Despite the identification of the causative gene defect, it has not been unambiguously clarified how GNE gene mutations impair muscle metabolism. Although numerous studies have indicated a key role of hyposialylation of glycoproteins in h-IBM/DMRV pathogenesis, others have demonstrated new and unpredicted functions of the GNE gene, outside the sialic acid biosynthetic pathway, that may also be relevant. This review illustrates the clinical and pathologic characteristics of h-IBM/DMRV and the main clues available to date concerning the possible pathogenic mechanisms and therapeutic perspectives of this disorder.
IBM NJE protocol emulator for VAX/VMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.
1981-01-01
Communications software has been written at Argonne National Laboratory to enable a VAX/VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE is actually a collection of programs that support job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any node in the network for printing, punching, or job submission, as well as to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously to allow users to perform other work while files are awaiting transmission. No changes are required to the IBM software.
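The queued, asynchronous transmission pattern described above can be sketched with a work queue and a background sender thread (a minimal illustration, not the actual NJE emulator; the names are assumptions):

```python
import queue
import threading

def start_sender(transmit):
    """Start a background sender: callers enqueue files and return
    immediately, while a worker thread drains the queue and calls
    `transmit` on each file. Returns the queue, the thread, and a
    list recording the files sent, in order."""
    q = queue.Queue()
    sent = []

    def worker():
        while True:
            item = q.get()
            if item is None:           # shutdown sentinel
                break
            transmit(item)             # the slow network transfer
            sent.append(item)
            q.task_done()

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return q, t, sent
```

This is the essential design choice in the abstract: enqueueing decouples the user's session from the network transfer, so work continues while files await transmission.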
Delaunay, Agnès; Bromberg, Kenneth D; Hayashi, Yukiko; Mirabella, Massimiliano; Burch, Denise; Kirkwood, Brian; Serra, Carlo; Malicdan, May C; Mizisin, Andrew P; Morosetti, Roberta; Broccolini, Aldobrando; Guo, Ling T; Jones, Stephen N; Lira, Sergio A; Puri, Pier Lorenzo; Shelton, G Diane; Ronai, Ze'ev
2008-02-13
Growing evidence supports the importance of ubiquitin ligases in the pathogenesis of muscular disorders, although underlying mechanisms remain largely elusive. Here we show that the expression of RNF5 (aka RMA1), an ER-anchored RING finger E3 ligase implicated in muscle organization and in recognition and processing of malfolded proteins, is elevated and mislocalized to cytoplasmic aggregates in biopsies from patients suffering from sporadic-Inclusion Body Myositis (sIBM). Consistent with these findings, an animal model for hereditary IBM (hIBM), but not their control littermates, revealed deregulated expression of RNF5. Further studies for the role of RNF5 in the pathogenesis of s-IBM and more generally in muscle physiology were performed using RNF5 transgenic and KO animals. Transgenic mice carrying inducible expression of RNF5, under control of beta-actin or muscle specific promoter, exhibit an early onset of muscle wasting, muscle degeneration and extensive fiber regeneration. Prolonged expression of RNF5 in the muscle also results in the formation of fibers containing congophilic material, blue-rimmed vacuoles and inclusion bodies. These phenotypes were associated with altered expression and activity of ER chaperones, characteristic of myodegenerative diseases such as s-IBM. Conversely, muscle regeneration and induction of ER stress markers were delayed in RNF5 KO mice subjected to cardiotoxin treatment. While supporting a role for RNF5 Tg mice as model for s-IBM, our study also establishes the importance of RNF5 in muscle physiology and its deregulation in ER stress associated muscular disorders.
A Method for Transferring Photoelectric Photometry Data from Apple II+ to IBM PC
NASA Astrophysics Data System (ADS)
Powell, Harry D.; Miller, James R.; Stephenson, Kipp
1989-06-01
A method is presented for transferring photoelectric photometry data files from an Apple II computer to an IBM PC computer in a form which is compatible with the AAVSO Photoelectric Photometry data collection process.
External audio for IBM-compatible computers
NASA Technical Reports Server (NTRS)
Washburn, David A.
1992-01-01
Numerous applications benefit from the presentation of computer-generated auditory stimuli at points discontiguous with the computer itself. Modification of an IBM-compatible computer for use of an external speaker is relatively easy but not intuitive. This modification is briefly described.
Two autowire versions for CDC-3200 and IBM-360
NASA Technical Reports Server (NTRS)
Billingsley, J. B.
1972-01-01
Microelectronics program was initiated to evaluate circuitry, packaging methods, and fabrication approaches necessary to produce completely procured logic system. Two autowire programs were developed for CDC-3200 and IBM-360 computers for use in designing logic systems.
ERIC Educational Resources Information Center
Batt, Russell H., Ed.
1989-01-01
Describes two chemistry computer programs: (1) "Eureka: A Chemistry Problem Solver" (problem files may be written by the instructor, MS-DOS 2.0, IBM with 384K); and (2) "PC-File+" (database management, IBM with 416K and two floppy drives). (MVL)
Controlled shutdown of a fuel cell
Clingerman, Bruce J.; Keskula, Donald H.
2002-01-01
A method is provided for the shutdown of a fuel cell system to relieve system overpressure while maintaining air compressor operation, and corresponding vent valving and control arrangement. The method and venting arrangement are employed in a fuel cell system, for instance a vehicle propulsion system, comprising, in fluid communication, an air compressor having an outlet for providing air to the system, a combustor operative to provide combustor exhaust to the fuel processor.
Generalized Monitoring Facility. Users Manual.
1982-05-01
A software-based monitor. The RMC will sample system queues and tables on a 30-second time interval. The data captured from these queues and cells are written to a file. If a cell changes more than once during a sampling period, only the final change will be reported. A set of communication region cells is constantly monitored for changes. When GMC terminates, it writes a record containing information read from communication region cells.
Fuel Cell Power Plant Initiative. Volume 2; Preliminary Design of a Fixed-Base LFP/SOFC Power System
NASA Technical Reports Server (NTRS)
Veyo, S.E.
1997-01-01
This report documents the preliminary design for a military fixed-base power system of 3 MWe nominal capacity using Westinghouse's tubular Solid Oxide Fuel Cell [SOFC] and Haldor Topsoe's logistic fuels processor [LFP]. The LFP provides to the fuel cell a methane-rich, sulfur-free fuel stream derived from either DF-2 diesel fuel or JP-8 turbine fuel. Fuel cells are electrochemical devices that directly convert the chemical energy contained in fuels such as hydrogen, natural gas, or coal gas into electricity at high efficiency with no intermediate heat engine or dynamo. The SOFC is distinguished from other fuel cell types by its solid-state ceramic structure and its high operating temperature, nominally 1000°C. The SOFC pioneered by Westinghouse has a tubular geometry closed at one end. A power generation stack is formed by aggregating many cells in an ordered array. The Westinghouse stack design is distinguished from other fuel cell stacks by the complete absence of high-integrity seals between cell elements, cells, and between stack and manifolds. Further, the reformer for natural gas [predominantly methane] and the stack are thermally and hydraulically integrated with no requirement for process water. The technical viability of combining the tubular SOFC and a logistic fuels processor was demonstrated at 27 kWe scale in a test program sponsored by the Advanced Research Projects Agency [ARPA] and carried out at Southern California Edison's [SCE] Highgrove generating station near San Bernardino, California in 1994/95. The LFP was a breadboard design supplied by Haldor Topsoe, Inc. under subcontract to Westinghouse. The test program was completely successful. The LFP fueled the SOFC for 766 hours on JP-8 and 1555 hours on DF-2. In addition, the fuel cell operated for 3261 hours on pipeline natural gas. Over the 5582 hours of operation, the SOFC generated 118 MWh of electricity with no perceptible degradation in performance.
The LFP processed military specification JP-8 and DF-2 removing the sulfur and reforming these liquid fuels to a methane rich gaseous fuel. Results of this program are documented in a companion report titled 'Final Report-Solid Oxide Fuel Cell/ Logistic Fuels Processor 27 kWe Power System'.
1983-06-01
1D-A132 95 DEVELOPMENT OF A GIFTS (GRAPHICS ORIENTED INTERACTIVE i/i FINITE-ELEMENT TIME..(U) NAVAL POSTGRADUATE SCHOOL I MONTEREY CA T R PICKLES JUN...183 THESIS " DEVELOPMENT OF A GIFTS PLOTTING PACKAGE COMPATIBLE WITH EITHER PLOT10 OR IBM/DSM GRAPHICS by Thomas R. Pickles June 1983 Thesis Advisor: G...TYPEAFtWEPORT & PERIOD COVERED Development of GIFTS Plotting Package Bi ’s Thesis; Compatible with either PLOTl0 or June 1983 IBM/DSM Graphics 6. PERFORMING ORO
An Input Routine Using Arithmetic Statements for the IBM 704 Digital Computer
NASA Technical Reports Server (NTRS)
Turner, Don N.; Huff, Vearl N.
1961-01-01
An input routine has been designed for use with FORTRAN or SAP coded programs which are to be executed on an IBM 704 digital computer. All input to be processed by the routine is punched on IBM cards as declarative statements of the arithmetic type resembling the FORTRAN language. The routine is 850 words in length. It is capable of loading fixed- or floating-point numbers, octal numbers, and alphabetic words, and of performing simple arithmetic as indicated on input cards. Provisions have been made for rapid loading of arrays of numbers in consecutive memory locations.
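The flavor of such an arithmetic-statement input routine can be sketched in a few lines (a hedged illustration of the idea, not the 850-word IBM 704 routine, which also handled octal numbers and alphabetic words; the function name is an assumption):

```python
import re

def load_cards(cards):
    """Process declarative input statements of the form 'NAME = EXPR',
    where EXPR may reference previously loaded names and use simple
    + - * / arithmetic. Returns the resulting 'memory' of values."""
    memory = {}
    for card in cards:
        name, expr = (s.strip() for s in card.split("=", 1))
        # Substitute previously defined names into the expression.
        for sym, val in memory.items():
            expr = re.sub(r"\b%s\b" % re.escape(sym), repr(val), expr)
        # Evaluate the remaining arithmetic with builtins disabled.
        memory[name] = eval(expr, {"__builtins__": {}})
    return memory
```

The appeal, then as now, is that input cards read like the program's own language rather than a fixed-column data format.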
Botulinum toxin alleviates dysphagia of patients with inclusion body myositis.
Schrey, Aleksi; Airas, Laura; Jokela, Manu; Pulkkinen, Jaakko
2017-09-15
Oropharyngeal dysphagia is a disabling and undertreated symptom that often occurs in patients with sporadic inclusion body myositis (s-IBM). In this study, we examined the effect of botulinum neurotoxin A (BoNT-A) injections to the cricopharyngeus muscle (CPM) of patients with s-IBM and dysphagia. A single-center retrospective study involving 40 biopsy-proven s-IBM patients treated in the District of Southwest Finland from 2000 to 2013. The incidence of dysphagia, rate of aspirations, rate of aspiration pneumonias, and treatment results of dysphagia were analyzed. Patients treated for dysphagia were evaluated before and after surgery by video-fluoroscopy and/or using a questionnaire. Twenty-five of the 40 s-IBM patients (62.5%) experienced dysphagia. BoNT-A was injected a median of 2 times (range 1-7) in 12 patients with dysphagia. Before the injections 7 patients reported aspiration, none afterwards. The corresponding figures for aspiration pneumonia were 3 and 0. All of these patients had normal swallowing function 12 months (median, range 2-60) after the last injection. BoNT-A injections to the CPM alleviate the dysphagia of s-IBM patients reversibly and appear to reduce the rate of aspiration effectively.
Broccolini, Aldobrando; Gidaro, Teresa; De Cristofaro, Raimondo; Morosetti, Roberta; Gliubizzi, Carla; Ricci, Enzo; Tonali, Pietro A; Mirabella, Massimiliano
2008-05-01
Autosomal recessive hereditary inclusion-body myopathy (h-IBM) is caused by mutations of the UDP-N-acetylglucosamine 2-epimerase/N-acetylmannosamine kinase gene, a rate-limiting enzyme in the sialic acid metabolic pathway. Previous studies have demonstrated an abnormal sialylation of glycoproteins in h-IBM. h-IBM muscle shows the abnormal accumulation of proteins including amyloid-beta (Abeta). Neprilysin (NEP), a metallopeptidase that cleaves Abeta, is characterized by the presence of several N-glycosylation sites, and changes in these sugar moieties affect its stability and enzymatic activity. In the present study, we found that NEP is hyposialylated and its expression and enzymatic activity reduced in all h-IBM muscles analyzed. In vitro, the experimental removal of sialic acid by Vibrio Cholerae neuraminidase in cultured myotubes resulted in reduced expression of NEP. This was most likely because of a post-translational modification consisting in an abnormal sialylation of the protein that leads to its reduced stability. Moreover, treatment with Vibrio Cholerae neuraminidase was associated with an increased immunoreactivity for Abeta mainly in the form of distinct cytoplasmic foci within myotubes. We hypothesize that, in h-IBM muscle, hyposialylated NEP has a role in hampering the cellular Abeta clearing system, thus contributing to its abnormal accumulation within vulnerable fibers and possibly promoting muscle degeneration.
INDIVIDUAL-BASED MODELS: POWERFUL OR POWER STRUGGLE?
Willem, L; Stijven, S; Hens, N; Vladislavleva, E; Broeckhove, J; Beutels, P
2015-01-01
Individual-based models (IBMs) offer endless possibilities to explore various research questions but come with high model complexity and computational burden. Large-scale IBMs have become feasible but the novel hardware architectures require adapted software. The increased model complexity also requires systematic exploration to gain thorough system understanding. We elaborate on the development of IBMs for vaccine-preventable infectious diseases and model exploration with active learning. Investment in IBM simulator code can lead to significant runtime reductions. We found large performance differences due to data locality. Sorting the population once reduced simulation time by a factor of two. Storing person attributes separately instead of using person objects also seemed more efficient. Next, we improved model performance by up to 70% by structuring potential contacts based on health status before processing disease transmission. The active learning approach we present is based on iterative surrogate modelling and model-guided experimentation. Symbolic regression is used for nonlinear response surface modelling with automatic feature selection. We illustrate our approach using an IBM for influenza vaccination. After optimizing the parameter space, we observed an inverse relationship between vaccination coverage and the clinical attack rate, reinforced by herd immunity. These insights can be used to focus and optimise research activities, and to reduce both dimensionality and decision uncertainty.
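A minimal sketch of the data-locality idea described above, storing person attributes in separate contiguous arrays ("struct of arrays") instead of per-person objects and partitioning the population by health status before the transmission step. This is not the authors' code; the attribute names, population size, and attack probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Attribute arrays: one contiguous array per attribute (not one object per person).
age = rng.integers(0, 90, n)
susceptible = rng.random(n) < 0.9   # health-status flag

# Partition the population by health status up front, so the transmission
# step scans one contiguous block instead of filtering the whole population.
order = np.argsort(~susceptible)    # susceptible individuals first
age, susceptible = age[order], susceptible[order]
n_susc = int(susceptible.sum())

# Transmission step now touches only the contiguous susceptible slice.
attack_prob = 0.001                 # per-step infection probability (assumed)
infected = rng.random(n_susc) < attack_prob
print(int(infected.sum()), "new infections among", n_susc, "susceptibles")
```

The same layout also makes the 70% improvement from health-status structuring plausible: the hot loop reads one flat array with perfect cache locality.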
A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes
With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2017-10-01
The paper presents the design and modeling of continuous-logic base cells (CL BC) based on current mirrors (CM), combining preliminary analog processing with subsequent analog-to-digital processing, for building sensor multichannel analog-to-digital converters (SMC ADCs) and image processors (IP). IPs and SMC ADCs with vector or matrix parallel inputs-outputs require active basic photosensitive cells with an extended electronic circuit, which are considered in the paper. Such basic cells, and ADCs based on them, have a number of advantages: high speed and reliability, simplicity, low power consumption, and a high integration level for linear and matrix structures. We show the design of the CL BC and photocurrent ADCs, their various possible implementations, and their simulations. We consider CL BCs for selection and rank preprocessing methods, and a linear array of ADCs with conversion to binary and Gray codes. In contrast to our previous work, here we dwell on analog preprocessing schemes for the signals of neighboring cells and show how introducing simple nodes based on current mirrors extends the range of functions performed by the image processor. Each channel of the structure consists of several digital-analog cells (DC) of 15-35 CMOS transistors. The number of DCs does not exceed the number of digits of the formed code; for an iterative type, only one DC, complemented by a selection-and-holding device (SHD), is required. One channel of an iterative ADC is based on one DC and an SHD and has only 35 CMOS transistors. Such ADCs can readily produce parallel as well as serial-parallel output codes. The circuits and the results of simulating their designs with OrCAD are shown. The supply voltage of the DC is 1.8-3.3 V, the input photocurrent range is 0.1-24 μA, and the conversion time is 20-30 ns at 6-8 bit binary or Gray codes.
The overall power consumption of the iterative ADC is only 50-100 μW at a maximum input current of 4 μA. This simple linear ADC array structure combines low power consumption and a 3.3 V supply with good dynamic characteristics (a digitization frequency of 40-50 MHz even for 1.5 μm CMOS technology, which can be increased by up to ten times) and good accuracy characteristics. SMC ADCs based on CL BCs and CMs open new prospects for realizing linear and matrix IPs and photo-electronic structures with matrix operands, which are needed for neural networks, digital optoelectronic processors, and neuro-fuzzy controllers.
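The abstract above mentions ADC channels that output either plain binary or Gray codes. A short sketch of the standard reflected-binary Gray code conversion (a textbook construction, not taken from the paper's circuitry) shows the property that makes Gray outputs attractive for ADCs:

```python
def binary_to_gray(n: int) -> int:
    """Each Gray bit is the XOR of adjacent binary bits."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert by cumulatively XORing the shifted code down to bit 0."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent codes differ in exactly one bit, so a conversion step can never
# produce a multi-bit glitch between neighboring ADC levels.
codes = [binary_to_gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])
# → ['000', '001', '011', '010', '110', '111', '101', '100']
```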
A Controlled-Environment Chamber for Atmospheric Chemistry Studies Using FT-IR Spectroscopy
1990-06-01
[OCR fragments; recoverable details] Keywords: controlled-environment chamber; long-path cell; FT-IR; hydrazine decay. A modification doubles the usable path length of the original multipass cell described by White (Reference 8); the pattern of images formed on the nesting system is shown in Figure 13.
NASA Astrophysics Data System (ADS)
Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.
2014-03-01
The processor-memory performance gap, commonly referred to as "Memory Wall" problem, owes to the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to consume more than 50% of the chip real-estate for caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss on e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.
REMOTE: Modem Communicator Program for the IBM personal computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGirt, F.
1984-06-01
REMOTE, a Modem Communicator Program, was developed to provide full duplex serial communication with arbitrary remote computers via either dial-up telephone modems or direct lines. The latest version of REMOTE (documented in this report) was developed for the IBM Personal Computer.
Using IBMs to Investigate Spatially-dependent Processes in Landscape Genetics Theory
Much of landscape and conservation genetics theory has been derived using non-spatial mathematical models. Here, we use a mechanistic, spatially-explicit, eco-evolutionary IBM to examine the utility of this theoretical framework in landscapes with spatial structure. Our analysis...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binney, E.J.
LION4 is a computer program for calculating one-, two-, or three-dimensional transient and steady-state temperature distributions in reactor and reactor plant components. It is used primarily for thermal-structural analyses. It utilizes finite difference techniques with first-order forward difference integration and is capable of handling a wide variety of bounding conditions. Heat transfer situations accommodated include forced and free convection in both reduced and fully-automated temperature dependent forms, coolant flow effects, a limited thermal radiation capability, a stationary or stagnant fluid gap, a dual dependency (temperature difference and temperature level) heat transfer, an alternative heat transfer mode comparison and selection facility combined with a heat flux direction sensor, and any form of time-dependent boundary temperatures. The program, which handles time and space dependent internal heat generation, can also provide temperature dependent material properties with limited non-isotropic properties. User-oriented capabilities available include temperature means with various weightings and a complete heat flow rate surveillance system. Hardware: CDC 6600/7600; UNIVAC 1108; IBM 360/370. Languages: FORTRAN IV and ASCENT (CDC 6600/7600); FORTRAN IV (UNIVAC 1108A,B and IBM 360/370). Operating systems: SCOPE (CDC 6600/7600); EXEC8 (UNIVAC 1108A,B); OS/360,370 (IBM 360/370). The CDC 6600 version uses the plotter routine LAPL4 to produce the input required by the associated CalComp plotter for graphical output. The IBM 360 version requires 350K for execution and one additional input/output unit besides the standard units.
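A hedged one-dimensional sketch of the first-order forward-difference (explicit) integration that codes like LION4 apply to transient conduction. The rod geometry, diffusivity, step sizes, and boundary temperatures here are illustrative assumptions, not the program's actual formulation.

```python
import numpy as np

alpha = 1e-5          # thermal diffusivity, m^2/s (assumed material)
dx, dt = 0.01, 1.0    # grid spacing (m) and time step (s)
r = alpha * dt / dx**2
assert r <= 0.5, "explicit forward-difference stability limit"

T = np.full(51, 20.0)      # initial temperature along the rod, deg C
T[0], T[-1] = 100.0, 20.0  # fixed (time-independent) boundary temperatures

for _ in range(5000):      # march forward in time
    # T_i^{n+1} = T_i^n + r * (T_{i+1}^n - 2*T_i^n + T_{i-1}^n)
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

# The profile relaxes toward the linear steady state between the boundaries.
print("midpoint temperature:", round(float(T[25]), 2))
```

Time-dependent boundary temperatures or internal heat generation, which LION4 supports, would simply update `T[0]`, `T[-1]`, or add a source term inside the loop.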
Price, Mark A.; Barghout, Victoria; Benveniste, Olivier; Christopher-Stine, Lisa; Corbett, Alastair; de Visser, Marianne; Hilton-Jones, David; Kissel, John T.; Lloyd, Thomas E.; Lundberg, Ingrid E.; Mastaglia, Francis; Mozaffar, Tahseen; Needham, Merrilee; Schmidt, Jens; Sivakumar, Kumaraswamy; DeMuro, Carla; Tseng, Brian S.
2016-01-01
Background: There is a paucity of data on mortality and causes of death (CoDs) in patients with sporadic inclusion body myositis (sIBM), a rare, progressive, degenerative, inflammatory myopathy that typically affects those aged over 50 years. Objective: Based on patient records and expertise of clinical specialists, this study used questionnaires to evaluate physicians’ views on clinical characteristics of sIBM that may impact on premature mortality and CoDs in these patients. Methods: Thirteen physicians from seven countries completed two questionnaires online between December 20, 2012 and January 15, 2013. Responses to the first questionnaire were collated and presented in the second questionnaire to seek elaboration and identify consensus. Results: All 13 physicians completed both questionnaires, providing responses based on 585 living and 149 deceased patients under their care. Patients were reported to have experienced dysphagia (60.2%) and injurious falls (44.3%) during their disease. Over half of physicians reported that a subset of their patients with sIBM had a shortened lifespan (8/13), and agreed that bulbar dysfunction/dysphagia/oropharyngeal involvement (12/13), early-onset disease (8/13), severe symptoms (8/13), and falls (7/13) impacted lifespan. Factors related to sIBM were reported as CoDs in 40% of deceased patients. Oropharyngeal muscle dysfunction was ranked as the leading feature of sIBM that could contribute to death. The risk of premature mortality was higher than in the age-matched comparison population. Conclusions: In the absence of data from traditional sources, this study suggests that features of sIBM may contribute to premature mortality and may be used to inform future studies. PMID:27854208
NASA Astrophysics Data System (ADS)
Christeson, G. L.; Morgan, S.; Kodaira, S.; Yamashita, M.; Almeev, R. R.; Michibayashi, K.; Sakuyama, T.; Ferré, E. C.; Kurz, W.
2016-12-01
Most of the well-preserved ophiolite complexes are believed to form in suprasubduction zone (SSZ) settings. We compare physical properties and seismic structure of SSZ crust at the Izu-Bonin-Mariana (IBM) fore arc with oceanic crust drilled at Holes 504B and 1256D to evaluate the similarities of SSZ and oceanic crust. Expedition 352 basement consists of fore-arc basalt (FAB) and boninite lavas and dikes. P-wave sonic log velocities are substantially lower for the IBM fore arc (mean values 3.1-3.4 km/s) compared to Holes 504B and 1256D (mean values 5.0-5.2 km/s) at depths of 0-300 m below the sediment-basement interface. For similar porosities, lower P-wave sonic log velocities are observed at the IBM fore arc than at Holes 504B and 1256D. We use a theoretical asperity compression model to calculate the fractional area of asperity contact Af across cracks. Af values are 0.021-0.025 at the IBM fore arc and 0.074-0.080 at Holes 504B and 1256D for similar depth intervals (0-300 m within basement). The Af values indicate more open (but not necessarily wider) cracks in the IBM fore arc than for the oceanic crust at Holes 504B and 1256D, which is consistent with observations of fracturing and alteration at the Expedition 352 sites. Seismic refraction data constrain a crustal thickness of 10-15 km along the IBM fore arc. Implications and inferences are that crust-composing ophiolites formed at SSZ settings could be thick and modified after accretion, and these processes should be considered when using ophiolites as an analog for oceanic crust.
Price, Mark A; Barghout, Victoria; Benveniste, Olivier; Christopher-Stine, Lisa; Corbett, Alastair; de Visser, Marianne; Hilton-Jones, David; Kissel, John T; Lloyd, Thomas E; Lundberg, Ingrid E; Mastaglia, Francis; Mozaffar, Tahseen; Needham, Merrilee; Schmidt, Jens; Sivakumar, Kumaraswamy; DeMuro, Carla; Tseng, Brian S
2016-03-03
There is a paucity of data on mortality and causes of death (CoDs) in patients with sporadic inclusion body myositis (sIBM), a rare, progressive, degenerative, inflammatory myopathy that typically affects those aged over 50 years. Based on patient records and expertise of clinical specialists, this study used questionnaires to evaluate physicians' views on clinical characteristics of sIBM that may impact on premature mortality and CoDs in these patients. Thirteen physicians from seven countries completed two questionnaires online between December 20, 2012 and January 15, 2013. Responses to the first questionnaire were collated and presented in the second questionnaire to seek elaboration and identify consensus. All 13 physicians completed both questionnaires, providing responses based on 585 living and 149 deceased patients under their care. Patients were reported to have experienced dysphagia (60.2%) and injurious falls (44.3%) during their disease. Over half of physicians reported that a subset of their patients with sIBM had a shortened lifespan (8/13), and agreed that bulbar dysfunction/dysphagia/oropharyngeal involvement (12/13), early-onset disease (8/13), severe symptoms (8/13), and falls (7/13) impacted lifespan. Factors related to sIBM were reported as CoDs in 40% of deceased patients. Oropharyngeal muscle dysfunction was ranked as the leading feature of sIBM that could contribute to death. The risk of premature mortality was higher than in the age-matched comparison population. In the absence of data from traditional sources, this study suggests that features of sIBM may contribute to premature mortality and may be used to inform future studies.
Galle, J; Hoffmann, M; Aust, G
2009-01-01
Collective phenomena in multi-cellular assemblies can be approached on different levels of complexity. Here, we discuss a number of mathematical models which consider the dynamics of each individual cell, so-called agent-based or individual-based models (IBMs). As a special feature, these models make it possible to account for intracellular decision processes which are triggered by biomechanical cell-cell or cell-matrix interactions. We discuss their impact on the growth and homeostasis of multi-cellular systems as simulated by lattice-free models. Our results demonstrate that cell polarisation subsequent to cell-cell contact formation can be a source of stability in epithelial monolayers. Stroma contact-dependent regulation of tumour cell proliferation and migration is shown to result in invasion dynamics in accordance with the migrating cancer stem cell hypothesis. However, we demonstrate that different regulation mechanisms can equally well comply with present experimental results. Thus, we suggest a panel of experimental studies for the in-depth validation of the model assumptions.
Hydrogen Generation Via Fuel Reforming
NASA Astrophysics Data System (ADS)
Krebs, John F.
2003-07-01
Reforming is the conversion of a hydrocarbon-based fuel to a gas mixture that contains hydrogen. The H2 that is produced by reforming can then be used to produce electricity via fuel cells. The realization of H2-based power generation, via reforming, is facilitated by the existence of the liquid fuel and natural gas distribution infrastructures. Coupling these same infrastructures with more portable reforming technology facilitates the realization of fuel-cell-powered vehicles. The reformer is the first component in a fuel processor. Contaminants in the H2-enriched product stream, such as carbon monoxide (CO) and hydrogen sulfide (H2S), can significantly degrade the performance of current polymer electrolyte membrane fuel cells (PEMFCs). Removal of such contaminants requires extensive processing of the H2-rich product stream prior to utilization by the fuel cell to generate electricity. The remaining components of the fuel processor remove the contaminants in the H2 product stream. For transportation applications the entire fuel processing system must be as small and lightweight as possible to achieve desirable performance requirements. Current efforts at Argonne National Laboratory are focused on catalyst development and reactor engineering of the autothermal processing train for transportation applications.
Maintenance of Microcomputers. Manual and Apple II Session, IBM Session.
ERIC Educational Resources Information Center
Coffey, Michael A.; And Others
This guide describes maintenance procedures for IBM and Apple personal computers, provides information on detecting and diagnosing problems, and details diagnostic programs. Included are discussions of printers, terminals, disks, disk drives, keyboards, hardware, and software. The text is supplemented by various diagrams. (EW)
An Apple for Your IBM PC--The Quadlink Board.
ERIC Educational Resources Information Center
Owen, G. Scott
1984-01-01
Describes nature and installation of the QUADLINK board which allows Apple software to be run on IBM PC microcomputers. Although programs tested ran without problems, users should test their own programs since there are some copy protection schemes that can baffle the board. (JN)
ERIC Educational Resources Information Center
Journal of Chemical Education, 1988
1988-01-01
Reviews two computer programs: "Molecular Graphics," which allows molecule manipulation in three-dimensional space (requiring IBM PC with 512K, EGA monitor, and math coprocessor); and "Periodic Law," a database which contains up to 20 items of information on each of the first 103 elements (Apple II or IBM PC). (MVL)
Injuries and Illnesses of Vietnam War POWs Revisited: IV. Air Force Risk Factors
2017-03-22
[OCR fragments; recoverable details] The study population was predominantly aviators imprisoned in North Vietnam. Statistical analyses were performed using IBM SPSS Statistics version 19; Pearson correlations were obtained. Cited sources include the Repatriated Prisoner of War Initial Medical Evaluation Forms (Department of Defense, Washington, D.C.) and IBM SPSS Statistics documentation (IBM Corporation, 2010).
Performance Support Case Studies from IBM.
ERIC Educational Resources Information Center
Duke-Moran, Celia; Swope, Ginger; Morariu, Janis; deKam, Peter
1999-01-01
Presents two case studies that show how IBM addressed performance support solutions and electronic learning. The first developed a performance support and expert coaching solution; the second applied performance support to reducing implementation time and total cost of ownership of enterprise resource planning systems. (Author/LRW)
Hazardous Waste Cleanup: IBM Corporation in Poughkeepsie, New York
This site covers approximately 423 acres, two-thirds of which is occupied by a manufacturing complex with more than 50 buildings. The land use in the area is a mix of industrial, commercial and residential. IBM is located approximately six miles south of t
The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update
Huynh, Tien; Rigoutsos, Isidore
2004-01-01
In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences, the identification—directly from sequence—of structural deviations from α-helicity and the annotation of amino acid sequences for antimicrobial activity. Additionally, annotations for more than 130 archaeal, bacterial, eukaryotic and viral genomes are now available on-line and can be searched interactively. The tools and code bundles continue to be accessible from http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/. PMID:15215340
The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update.
Huynh, Tien; Rigoutsos, Isidore
2004-07-01
In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences, the identification--directly from sequence--of structural deviations from alpha-helicity and the annotation of amino acid sequences for antimicrobial activity. Additionally, annotations for more than 130 archaeal, bacterial, eukaryotic and viral genomes are now available on-line and can be searched interactively. The tools and code bundles continue to be accessible from http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.
Dimachkie, Mazen M; Barohn, Richard J
2012-07-01
The idiopathic inflammatory myopathies are a group of rare disorders that share many similarities. These include dermatomyositis (DM), polymyositis (PM), necrotizing myopathy (NM), and sporadic inclusion body myositis (IBM). Inclusion body myositis is the most common idiopathic inflammatory myopathy after age 50 and it presents with chronic, asymmetric, proximal leg and distal arm muscle weakness. Despite similarities with PM, it is likely that IBM is primarily a degenerative disorder rather than an inflammatory muscle disease. Inclusion body myositis is associated with a modest degree of creatine kinase (CK) elevation and an abnormal electromyogram demonstrating an irritative myopathy with some chronicity. The muscle histopathology demonstrates inflammatory exudates surrounding and invading nonnecrotic muscle fibers, often accompanied by rimmed vacuoles. In this chapter, we review sporadic IBM. We also examine past, essentially negative, clinical trials in IBM and review ongoing clinical trials. For further details on DM, PM, and NM, the reader is referred to the idiopathic inflammatory myopathies chapter. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.; Raffenetti, C.
NJE is communications software developed to enable a VAX VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE supports job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any network node for printing, punching, or job submission, or to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously. No changes are required to the IBM software. DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%); VMS 2.5; VAX11/780 with DUP-11 UNIBUS interface and 9600 baud synchronous modem.
NASA Technical Reports Server (NTRS)
Perry, Jimmy L.
1992-01-01
The same kinds of standards and controls are established that are currently in use for the procurement of new analog, digital, and IBM/IBM compatible 3480 tape cartridges, and 1-in-wide channel video magnetic tapes. The Magnetic Tape Certification Facility (MTCF) maintains a Qualified Products List (QPL) for the procurement of new magnetic media and uses the following specifications for the QPL and Acceptance Tests: (1) NASA TM-79724 is used for the QPL and Acceptance Testing of new analog magnetic tapes; (2) NASA TM-80599 is used for the QPL and Acceptance Testing of new digital magnetic tapes; (3) NASA TM-100702 is used for the QPL and Acceptance Testing of new IBM/IBM compatible 3480 magnetic tape cartridges; and (4) NASA TM-100712 is used for the QPL and Acceptance Testing of new 1-in-wide channel video magnetic tapes. This document will be used for the QPL and Acceptance Testing of new Helical Scan 8 mm digital data tape cartridges.
Making Predictions in a Changing World: The Benefits of Individual-Based Ecology
Stillman, Richard A.; Railsback, Steven F.; Giske, Jarl; Berger, Uta; Grimm, Volker
2014-01-01
Ecologists urgently need a better ability to predict how environmental change affects biodiversity. We examine individual-based ecology (IBE), a research paradigm that promises a better predictive ability by using individual-based models (IBMs) to represent ecological dynamics as arising from how individuals interact with their environment and with each other. A key advantage of IBMs is that the basis for predictions—fitness maximization by individual organisms—is more general and reliable than the empirical relationships that other models depend on. Case studies illustrate the usefulness and predictive success of long-term IBE programs. The pioneering programs had three phases: conceptualization, implementation, and diversification. Continued validation of models runs throughout these phases. The breakthroughs that make IBE more productive include standards for describing and validating IBMs, improved and standardized theory for individual traits and behavior, software tools, and generalized instead of system-specific IBMs. We provide guidelines for pursuing IBE and a vision for future IBE research. PMID:26955076
Solving Coupled Gross--Pitaevskii Equations on a Cluster of PlayStation 3 Computers
NASA Astrophysics Data System (ADS)
Edwards, Mark; Heward, Jeffrey; Clark, C. W.
2009-05-01
At Georgia Southern University we have constructed an 8+1-node cluster of Sony PlayStation 3 (PS3) computers with the intention of using this computing resource to solve problems related to the behavior of ultra-cold atoms in general, with a particular emphasis on studying Bose-Bose and Bose-Fermi mixtures confined in optical lattices. As a first project that uses this computing resource, we have implemented a parallel solver for the coupled time-dependent, one-dimensional Gross-Pitaevskii (TDGP) equations. These equations govern the behavior of dual-species bosonic mixtures. We chose the split-operator/FFT method to solve the coupled 1D TDGP equations. The fast Fourier transform component of this solver can be readily parallelized on the PS3 CPU, known as the Cell Broadband Engine (CellBE). Each CellBE chip contains a single 64-bit PowerPC Processor Element (PPE) and eight "Synergistic Processor Elements" (SPEs). We report on this algorithm and compare its performance to a non-parallel solver as applied to modeling evaporative cooling in dual-species bosonic mixtures.
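A minimal single-species sketch of the split-operator/FFT scheme (the imaginary-time variant, relaxing to the ground state, with hbar = m = 1). The grid, harmonic trap, and interaction strength g are illustrative assumptions; the cluster code solves the coupled two-species equations and parallelizes the FFT across SPEs.

```python
import numpy as np

n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # momentum grid for the FFT
dt, g = 1e-3, 50.0                             # imaginary-time step, interaction
V = 0.5 * x**2                                 # assumed harmonic trap

psi = np.exp(-x**2).astype(complex)            # initial guess
for _ in range(2000):
    # Half step in position space: potential plus mean-field nonlinearity ...
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi) ** 2))
    # ... full kinetic step applied diagonally in momentum space via FFT ...
    psi = np.fft.ifft(np.exp(-0.5 * dt * k**2) * np.fft.fft(psi))
    # ... and the second potential half step (Strang splitting).
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi) ** 2))
    # Imaginary-time evolution does not conserve the norm; renormalize.
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))

print("peak density:", round(float(np.abs(psi).max() ** 2), 4))
```

Real-time propagation uses the same structure with `dt` replaced by `1j * dt`, and the coupled two-species case adds the cross-interaction term to each potential factor.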
Towards implementation of cellular automata in Microbial Fuel Cells.
Tsompanas, Michail-Antisthenis I; Adamatzky, Andrew; Sirakoulis, Georgios Ch; Greenman, John; Ieropoulos, Ioannis
2017-01-01
The Microbial Fuel Cell (MFC) is a bio-electrochemical transducer converting waste products into electricity using microbial communities. Cellular Automaton (CA) is a uniform array of finite-state machines that update their states in discrete time depending on states of their closest neighbors by the same rule. Arrays of MFCs could, in principle, act as massive-parallel computing devices with local connectivity between elementary processors. We provide a theoretical design of such a parallel processor by implementing CA in MFCs. We have chosen Conway's Game of Life as the 'benchmark' CA because this is the most popular CA which also exhibits an enormously rich spectrum of patterns. Each cell of the Game of Life CA is realized using two MFCs. The MFCs are linked electrically and hydraulically. The model is verified via simulation of an electrical circuit demonstrating equivalent behaviours. The design is a first step towards future implementations of fully autonomous biological computing devices with massive parallelism. The energy independence of such devices counteracts their somewhat slow transitions, compared with silicon circuitry, between the different states during computation.
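For reference, the benchmark CA the MFC design targets can be stated in a few lines of software; each cell of this toy implementation would correspond to two MFCs in the proposed hardware. The grid size and test pattern are illustrative.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    # Count the 8 neighbours by summing shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "blinker" oscillates with period 2: horizontal bar <-> vertical bar.
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1
after_one = life_step(grid)
after_two = life_step(after_one)
print(np.array_equal(after_two, grid))  # → True
```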
Towards implementation of cellular automata in Microbial Fuel Cells
Adamatzky, Andrew; Sirakoulis, Georgios Ch.; Greenman, John; Ieropoulos, Ioannis
2017-01-01
The Microbial Fuel Cell (MFC) is a bio-electrochemical transducer converting waste products into electricity using microbial communities. Cellular Automaton (CA) is a uniform array of finite-state machines that update their states in discrete time depending on states of their closest neighbors by the same rule. Arrays of MFCs could, in principle, act as massive-parallel computing devices with local connectivity between elementary processors. We provide a theoretical design of such a parallel processor by implementing CA in MFCs. We have chosen Conway’s Game of Life as the ‘benchmark’ CA because this is the most popular CA which also exhibits an enormously rich spectrum of patterns. Each cell of the Game of Life CA is realized using two MFCs. The MFCs are linked electrically and hydraulically. The model is verified via simulation of an electrical circuit demonstrating equivalent behaviours. The design is a first step towards future implementations of fully autonomous biological computing devices with massive parallelism. The energy independence of such devices counteracts their somewhat slow transitions—compared to silicon circuitry—between the different states during computation. PMID:28498871
ISS Payload Racks Automated Flow Control Calibration Method
NASA Technical Reports Server (NTRS)
Simmonds, Boris G.
2003-01-01
Payload Racks utilize MTL and/or LTL station water for cooling of payloads and avionics. Flow control valves range from fully closed up to 300 lbm/hr. Instrument accuracies are as high as ± 7.5 lbm/hr for flow sensors and ± 3 lbm/hr for the valve controller, for a total system accuracy of ± 10.5 lbm/hr. An improved methodology was developed, tested, and proven that reduces the uncertainty of the commanded flows to less than ± 1 lbm/hr. The methodology could be packaged in a "calibration kit" for on-orbit flow sensor checkout and recalibration, extending rack operations before return to Earth.
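The total system accuracy quoted above is the worst-case linear sum of the two instrument accuracies. A small sketch of that arithmetic, with the root-sum-square combination shown only as a common alternative (the abstract itself uses the linear sum):

```python
import math

sensor_err = 7.5       # flow-sensor accuracy, lbm/hr (from the abstract)
controller_err = 3.0   # valve-controller accuracy, lbm/hr (from the abstract)

worst_case = sensor_err + controller_err       # linear sum: 7.5 + 3 = 10.5
rss = math.hypot(sensor_err, controller_err)   # RSS alternative, ~8.08

print(worst_case, round(rss, 2))
```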
Final Report on Contract N00014-85-C-0078 (IBM Thomas J. Watson Research Center)
1990-01-01
beam was sent through the sodium cell; its peak input power was 350 W, and the... [Fig. 3: observed polarization beats (solid curves) compared with theory, plotted against TIME (psec)] ...that describe a new and powerful method of measuring φ(t) and thereby obtaining the... are obtained by combining these two methods, i.e., by sending a frequency-swept pulse through a resonant vapor... The first application of this powerful combination was by Shank et al., who compressed 90-fsec
Mathematics Programming on the Apple II and IBM PC.
ERIC Educational Resources Information Center
Myers, Roy E.; Schneider, David I.
1987-01-01
Details the features of BASIC used in mathematics programming and provides the information needed to translate between the Apple II and IBM PC computers. Discusses inputing a user-defined function, setting scroll windows, displaying subscripts and exponents, variable names, mathematical characters and special symbols. (TW)
Desk-top publishing using IBM-compatible computers.
Grencis, P W
1991-01-01
This paper sets out to describe one Medical Illustration Department's experience of the introduction of computers for desk-top publishing. In this particular case, after careful consideration of all the options open, an IBM-compatible system was installed rather than the often popular choice of an Apple Macintosh.
Software Reviews: Programs Worth a Second Look.
ERIC Educational Resources Information Center
Olds, Henry F., Jr.; And Others
1988-01-01
Examines four software packages: (1) "Wordbench"--writing and word processing, grades 9-12 (IBM and Apple); (2) "Muppet Slate"--language arts, grades K-2 (Apple); (3) "Accu-Weather Forecaster"--weather analysis and forecasting, grades 3-12 (modem with IBM or Mac); and (4) "The Ripple That Changed American…
Technology for Persons with Disabilities. An Introduction.
ERIC Educational Resources Information Center
IBM, Atlanta, GA. National Support Center for Persons with Disabilities.
This paper contains an overview of technology, national support organizations, and IBM support available to persons with disabilities related to impairments affecting hearing, learning, mobility, speech or language, and vision. The information was obtained from the IBM National Support Center for Persons with Disabilities, which was created to…
Performance statistics of the FORTRAN 4 /H/ library for the IBM system/360
NASA Technical Reports Server (NTRS)
Clark, N. A.; Cody, W. J., Jr.; Hillstrom, K. E.; Thieleker, E. A.
1969-01-01
Test procedures and results for accuracy and timing tests of the basic IBM 360/50 FORTRAN 4 /H/ subroutine library are reported. The testing was undertaken to verify performance capability and as a prelude to providing some replacement routines of improved performance.
Hazardous Waste Cleanup: IBM Corporation-TJ Watson Research Center in Yorktown Heights, New York
IBM Corporation-TJ Watson Research Center is located in southern Yorktown near the boundary separating the Town of Yorktown from the Town of New Castle. The site occupies an area of approximately 217 acres, and adjoining land uses are predominantly residential.
Summers, Berta J; Cougle, Jesse R
2016-12-01
Individuals meeting diagnostic criteria for body dysmorphic disorder (BDD; N = 40) were enrolled in a randomized, four-session trial comparing interpretation bias modification (IBM) training designed to target social evaluation- and appearance-related interpretation biases with a placebo control training condition (PC). Sessions took place over the course of two weeks (two sessions per week). Analyses indicated that, relative to the PC condition, IBM led to a significant increase in benign biases and reduction in threat biases at post-treatment. IBM also led to greater reductions in BDD symptoms compared to PC, though this effect was present at high but not low levels of pre-treatment BDD symptoms. Additionally, compared to PC, IBM led to lower urge to check and lower fear in response to an in vivo appearance-related stressor (having their picture taken from different angles), though the latter effect was present only among those reporting elevated fear at pre-treatment. The effects of treatment on interpretation biases and BDD symptoms were largely maintained at a one-month follow-up assessment. Moderated-mediation analyses showed that change in threat bias mediated the effect of condition on post-treatment symptoms for individuals high in pre-treatment BDD symptoms. The current study provides preliminary support for the efficacy of IBM for BDD. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, G.R.; Wilson, J.R.
COMCAN2A and COMCAN are designed to analyze complex systems such as nuclear plants for common causes of failure. A common cause event, or common mode failure, is a secondary cause that could contribute to the failure of more than one component and violates the assumption of independence. Analysis of such events is an integral part of system reliability and safety analysis. A significant common cause event is a secondary cause common to all basic events in one or more minimal cut sets. Minimal cut sets containing events from components sharing a common location or a common link are called common cause candidates. Components share a common location if no barrier insulates any one of them from the secondary cause. A common link is a dependency among components which cannot be removed by a physical barrier (e.g., a common energy source or common maintenance instructions). IBM 360; CDC CYBER 176, 175; FORTRAN IV (30%) and BAL (70%) (IBM 360), FORTRAN IV (97%) and COMPASS (3%) (CDC CYBER 176); OS/360 (IBM 360) and NOS/BE 1.4 (CDC CYBER 176), NOS 1.3 (CDC CYBER 175); 140K bytes of memory for COMCAN and 242K (octal) words of memory for COMCAN2A.
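The screening rule in the abstract above (a minimal cut set is a common cause candidate when all of its basic events come from components sharing a location or a link) can be sketched as follows; the function and data names are illustrative assumptions, not COMCAN's actual interface:

```python
# Sketch of common-cause candidate screening, assuming minimal cut
# sets are given as sets of basic-event names. Names and data layout
# are hypothetical; this is not COMCAN's API.

def common_cause_candidates(min_cut_sets, component_of, shared_groups):
    """min_cut_sets: list of sets of basic events.
    component_of: dict mapping each basic event to its component.
    shared_groups: list of sets of components sharing a common
    location (no insulating barrier) or a common link.
    Returns the cut sets flagged as common cause candidates."""
    candidates = []
    for cut_set in min_cut_sets:
        components = {component_of[event] for event in cut_set}
        # Candidate if every component in the cut set lies in one
        # shared-location or shared-link group.
        if any(components <= group for group in shared_groups):
            candidates.append(cut_set)
    return candidates

# Hypothetical example: two pumps in the same room (no barrier) form
# a shared-location group; a valve elsewhere does not.
cut_sets = [{"pump1_fails", "pump2_fails"}, {"pump1_fails", "valve1_fails"}]
components = {"pump1_fails": "P1", "pump2_fails": "P2", "valve1_fails": "V1"}
groups = [{"P1", "P2"}]
```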
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopp, H.J.; Mortensen, G.A.
1978-04-01
Approximately 60% of the full CDC 6600/7600 Datatran 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM- and CDC-developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.
NASA Astrophysics Data System (ADS)
Ichiyama, Yuji; Ito, Hisatoshi; Hokanishi, Natsumi; Tamura, Akihiro; Arai, Shoji
2017-06-01
A Paleogene accretionary complex, the Mineoka-Setogawa Belt, is distributed around the Izu Collision Zone, central Japan. Plutonic rocks of gabbro, diorite and tonalite compositions are included as fragments and dykes in an ophiolitic mélange in this belt. Zircon U-Pb dating of the plutonic rocks indicates that they were all formed contemporaneously at ca. 35 Ma. These ages are consistent with Eocene-Oligocene tholeiite and calc-alkaline arc magmatism in the Izu-Bonin-Mariana (IBM) Arc and exclude several previous models for the origin of the Mineoka-Setogawa ophiolitic rocks. The geochemical characteristics of these plutonic rocks are similar to those of the Eocene-Oligocene IBM tholeiite and calc-alkaline volcanic rocks as well as to the accreted middle crust of the IBM Arc, the Tanzawa Plutonic Complex. Moreover, their lithology is consistent with those of the middle and lower crust of the IBM Arc estimated from the seismic velocity structure. These lines of evidence strongly indicate that the plutonic rocks in the Mineoka-Setogawa ophiolitic mélange are fragments of the middle to lower crust of the IBM Arc. Additionally, the presence of the Mineoka-Setogawa intermediate to felsic plutonic rocks supports the hypothesis that intermediate magma can form continental crust in intra-oceanic arcs.
Huang, Ching-Ying; Ho, Ming-Ching; Lee, Jia-Jung; Hwang, Daw-Yang; Ko, Hui-Wen; Cheng, Yu-Che; Hsu, Yu-Hung; Lu, Huai-En; Chen, Hung-Chun; Hsieh, Patrick C H
2017-10-01
Autosomal dominant polycystic kidney disease (ADPKD) is one of the most prevalent forms of inherited cystic kidney disease, and can be characterized by kidney cyst formation and enlargement. Here we report the generation of a Type 1 ADPKD disease iPS cell line, IBMS-iPSC-012-12, which retains the conserved deletion of PKD1 and a normal karyotype, and exhibits the properties of pluripotent stem cells such as ES-like morphology, expression of pluripotent markers and the capacity to differentiate into all three germ layers. Our results show that we have successfully generated a patient-specific iPS cell line with a mutation in PKD1 for study of renal disease pathophysiology. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Transfer of numeric ASCII data files between Apple and IBM personal computers.
Allan, R W; Bermejo, R; Houben, D
1986-01-01
Listings for programs designed to transfer numeric ASCII data files between Apple and IBM personal computers are provided with accompanying descriptions of how the software operates. Details of the hardware used are also given. The programs may be easily adapted for transferring data between other microcomputers.
Software Reviews. Programs Worth a Second Look.
ERIC Educational Resources Information Center
Schneider, Roxanne; Eiser, Leslie
1989-01-01
Reviewed are three computer software packages for use in middle/high school classrooms. Included are "MacWrite II," a word-processing program for Macintosh computers; "Super Story Tree," a word-processing program for Apple and IBM computers; and "Math Blaster Mystery," for IBM, Apple, and Tandy computers. (CW)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-14
... of the negative determination regarding workers' eligibility to apply for Trade Adjustment Assistance (TAA) applicable to workers and former workers of International Business Machines (IBM), Sales and... was published in the Federal Register on November 17, 2010 (75 FR 70296). The workers supply computer...
Resource Guide for Persons with Speech or Language Impairments.
ERIC Educational Resources Information Center
IBM, Atlanta, GA. National Support Center for Persons with Disabilities.
The resource guide identifies products which assist speech or language impaired individuals in accessing IBM (International Business Machine) Personal Computers or the IBM Personal System/2 family of products. An introduction provides a general overview of ways computers can help persons with speech or language handicaps. The document then…