Sample records for ibm cell processor

  1. Impacts of the IBM Cell Processor to Support Climate Models

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Duffy, Daniel; Clune, Tom; Suarez, Max; Williams, Samuel; Halem, Milt

    2008-01-01

    NASA is interested in the performance and cost benefits of adapting its applications to the IBM Cell processor. However, its 256 KB of local memory per SPE and its new communication mechanism make it very challenging to port an application. We selected the solar radiation component of the NASA GEOS-5 climate model, which (1) is representative of column physics (approximately 50% of computational time), (2) has a high computational load relative to the data transferred to and from main memory, and (3) performs independent calculations across multiple columns. We converted the baseline code (single-precision Fortran) to C and ported it, manually SIMDizing four independent columns at a time, and found that a Cell with 8 SPEs can process 2274 columns per second. Compared with the baseline results, the Cell is approximately 5.2X, 8.2X, and 15.1X faster than a core on Intel Woodcrest, Dempsey, and Itanium2, respectively. We believe this dramatic performance improvement makes a hybrid cluster with Cell and traditional nodes competitive.
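
    As a minimal, hypothetical sketch of the SIMDization strategy described above (not the GEOS-5 solar radiation code itself), the same column-physics update can be applied to four independent columns in lockstep so that each arithmetic operation maps onto one 4-wide single-precision vector instruction on a Cell SPE; the level count and the update formula below are illustrative assumptions, written in C like the ported code.

      /* Sketch: process four independent columns in lockstep so each operation
       * maps onto a 4-wide single-precision SIMD instruction (e.g. the Cell
       * SPE's 128-bit vector float).  The "physics" update is a placeholder. */
      #include <stddef.h>

      #define NLEV 72   /* hypothetical number of vertical levels */

      /* Column index is the fastest-varying dimension, so in[k][0..3]
       * occupies one aligned 16-byte vector. */
      void simd4_columns(const float in[NLEV][4], float out[NLEV][4])
      {
          for (size_t k = 0; k < NLEV; k++)
              for (size_t c = 0; c < 4; c++)          /* one vector op per level */
                  out[k][c] = 0.5f * in[k][c] + 1.0f; /* placeholder physics    */
      }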

  2. Cell-NPE (Numerical Performance Evaluation): Programming the IBM Cell Broadband Engine -- A General Parallelization Strategy

    DTIC Science & Technology

    2008-04-01

    Space GmbH as follows: B. TECHNICAL PROPOSAL/DESCRIPTION OF WORK Cell: A Revolutionary High Performance Computing Platform On 29 June 2005 [1...IBM has announced that it has partnered with Mercury Computer Systems, a maker of specialized computers. The Cell chip provides massive floating-point...the computing industry away from the traditional processor technology dominated by Intel. While in the past, the development of computing power has

  3. Accuracy of the lattice-Boltzmann method using the Cell processor

    NASA Astrophysics Data System (ADS)

    Harvey, M. J.; de Fabritiis, G.; Giupponi, G.

    2008-11-01

    Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack double-precision support and support for some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony PlayStation3 and the QS20/QS21 IBM blade, obtaining speed-up factors of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that the choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.
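
    Since the abstract suggests the IEEE 754 rounding mode can matter as much as double-precision support, the following hedged C99 sketch (not the authors' LB code) shows how the runtime rounding mode is switched and how it changes the result of a long single-precision accumulation of the kind that dominates such codes.

      /* Illustrative C99 check of how the IEEE 754 rounding mode affects a long
       * single-precision accumulation; the Cell SPU's float pipeline only
       * supported round-toward-zero, which is what this contrast mimics. */
      #include <stdio.h>
      #include <fenv.h>

      #pragma STDC FENV_ACCESS ON

      static float accumulate(int n)
      {
          float sum = 0.0f;
          for (int i = 0; i < n; i++)
              sum += 1.0e-3f;               /* not exactly representable in binary */
          return sum;
      }

      int main(void)
      {
          fesetround(FE_TONEAREST);          /* default IEEE 754 rounding */
          printf("round-to-nearest : %.6f\n", accumulate(1000000));

          fesetround(FE_TOWARDZERO);         /* truncation, as on the SPU */
          printf("round-toward-zero: %.6f\n", accumulate(1000000));
          return 0;
      }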

  4. Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor

    DTIC Science & Technology

    2010-05-01

    Abstract—The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals...Broadband Engine Processor (Cell BE). The process of adapting the serial-based MUSIC algorithm to the Cell BE will be analyzed in terms of parallelism and...using the Multiple Signal Classification (MUSIC) algorithm [4] • Computation of Focus matrix • Computation of number of sources • Separation of Signal

  5. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shujia; Duffy, Daniel; Clune, Thomas

    The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive for fulfilling this requirement. However, the Cell's characteristics, 256 KB of local memory per SPE and a new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity, i.e., the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDized four independent columns and included several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.

  6. Conversion of Mass Storage Hierarchy in an IBM Computer Network

    DTIC Science & Technology

    1989-03-01

    storage devices GUIDE IBM users' group for DOS operating systems IBM International Business Machines IBM 370/145 CPU introduced in 1970 IBM 370/168 CPU...February 12, 1985, Information Systems Group, International Business Machines Corporation. "IBM 3090 Processor Complex" and "Mass Storage System..." Mainframe Journal, pp. 15-26, 64-65, Dallas, Texas, September-October 1987. 3. International Business Machines Corporation, Introduction to IBM 3S80 Storage

  7. Initial Performance Results on IBM POWER6

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piyush

    2008-01-01

    The POWER5+ processor has a faster memory bus than that of the previous generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the older POWER5 is better than that of the POWER5+ (5.7 GB/s vs. 4.3 GB/s). The reason for this is that in the POWER5+, the two cores on the chip share the L2 cache, L3 cache and memory bus. The memory controller is also on the chip and is shared by the two cores. This serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM has designed the POWER6 processor so as to avoid the bottlenecks due to the L2 cache, memory controller and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB - double that of the POWER5+), memory controller and buffer chips. Each core in the POWER6 runs at 4.7 GHz instead of 1.9 GHz in the POWER5+. In this paper, we evaluate the performance of a dual-core POWER6-based IBM p6-570 system, and we compare its performance with that of a dual-core POWER5+-based IBM p575+ system. In this evaluation, we have used the High-Performance Computing Challenge (HPCC) benchmarks, NAS Parallel Benchmarks (NPB), and four real-world applications--three from computational fluid dynamics and one from climate modeling.
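
    Per-core sustained bandwidth figures such as the 5.7 GB/s vs. 4.3 GB/s quoted above are usually obtained with a STREAM-style kernel; the rough C sketch below illustrates the measurement idea only, and its array size, timing method, and triad kernel are assumptions rather than the benchmark actually used in the paper.

      /* Rough STREAM-triad-style sketch of measuring sustained memory bandwidth.
       * The arrays must be much larger than the last-level cache. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N (1L << 24)                   /* 16M doubles = 128 MB per array */

      int main(void)
      {
          double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b),
                 *c = malloc(N * sizeof *c);
          if (!a || !b || !c) return 1;
          for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (long i = 0; i < N; i++)
              a[i] = b[i] + 3.0 * c[i];      /* triad: three arrays touched per element */
          clock_gettime(CLOCK_MONOTONIC, &t1);

          double sec = (t1.tv_sec - t0.tv_sec) + 1e-9 * (t1.tv_nsec - t0.tv_nsec);
          printf("triad bandwidth: %.2f GB/s\n", 3.0 * N * sizeof(double) / sec / 1e9);
          free(a); free(b); free(c);
          return 0;
      }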

  8. Implementation of a cone-beam backprojection algorithm on the cell broadband engine processor

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Knaup, Michael; Kachelrieß, Marc

    2007-03-01

    Tomographic image reconstruction is computationally very demanding. In all cases the backprojection represents the performance bottleneck due to its high operation count and the high demands it places on the memory subsystem. In the past, solving this problem has led to the implementation of specific architectures, connecting Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) to memory through dedicated high-speed busses. More recently, there have also been attempts to use Graphics Processing Units (GPUs) to perform the backprojection step. IBM, Toshiba and Sony have introduced the Cell Broadband Engine (CBE) processor, originally aimed at the gaming market and often considered a multicomputer on a chip. Clocked at 3 GHz, the Cell allows for a theoretical performance of 192 GFlops and a peak data transfer rate over the internal bus of 200 GB/s. This performance indeed makes the Cell a very attractive architecture for implementing tomographic image reconstruction algorithms. In this study, we investigate the relative performance of a perspective backprojection algorithm when implemented on a standard PC and on the Cell processor. We compare these results to the performance achievable with FPGA-based boards and high-end GPUs. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of 1024x1024 pixels into a volume of size 512x512x512 voxels. It took 3.2 minutes on the PC (single CPU) and only 13.6 seconds on the Cell.
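
    For orientation, a heavily simplified voxel-driven perspective backprojection loop is sketched below in C: each voxel is projected through a 3x4 matrix onto the detector of one view and the bilinearly interpolated detector value is accumulated. The geometry, weighting, and naming are assumptions for illustration, not the optimized PC/Cell/FPGA/GPU kernels compared in the study.

      /* Simplified voxel-driven perspective (cone-beam) backprojection for one view. */
      #include <math.h>

      void backproject_view(const float *det, int du, int dv, /* detector image   */
                            const float m[12],                /* 3x4 projection   */
                            float *vol, int nx, int ny, int nz)
      {
          for (int z = 0; z < nz; z++)
            for (int y = 0; y < ny; y++)
              for (int x = 0; x < nx; x++) {
                  /* homogeneous projection of voxel (x, y, z) onto the detector */
                  float w = m[8]*x + m[9]*y + m[10]*z + m[11];
                  float u = (m[0]*x + m[1]*y + m[2]*z + m[3]) / w;
                  float v = (m[4]*x + m[5]*y + m[6]*z + m[7]) / w;

                  int iu = (int)floorf(u), iv = (int)floorf(v);
                  if (iu < 0 || iv < 0 || iu >= du - 1 || iv >= dv - 1) continue;

                  float fu = u - iu, fv = v - iv;   /* bilinear interpolation */
                  float val =
                      (1-fu)*(1-fv)*det[iv*du + iu]     + fu*(1-fv)*det[iv*du + iu+1] +
                      (1-fu)*fv    *det[(iv+1)*du + iu] + fu*fv    *det[(iv+1)*du + iu+1];

                  vol[(z*ny + y)*nx + x] += val / (w*w);   /* distance weighting */
              }
      }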

  9. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barhen, Jacob; Kerekes, Ryan A; ST Charles, Jesse Lee

    2008-01-01

    High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next-generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper will highlight recent experience with four different parallel processors applied to signal processing tasks that are directly relevant to signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Doppler-sensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM CELL Broadband Engine applied to a 2-D discrete Fourier transform (DFT) kernel for image processing and frequency-domain processing. The third is the NVIDIA graphics processor applied to document feature clustering. EnLight Optical Core Processor: Optical processing is inherently capable of high parallelism that can be translated to very high-performance, low-power-dissipation computing. The EnLight 256 is a small-form-factor signal processing chip (5x5 cm2) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to a precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and on applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today. The optical

  10. The GF11 project at IBM

    NASA Astrophysics Data System (ADS)

    Sexton, James C.

    1990-08-01

    The GF11 project at IBM's T. J. Watson Research Center is entering full production for QCD numerical calculations. This paper describes the GF11 hardware and system software, and discusses the first production program which has been developed to run on GF11. This program is a variation of the Cabibbo-Marinari pure gauge Monte Carlo program for SU(3) and is currently sustaining almost 6 gigaflops on 360 processors in GF11.

  11. Fuel processors for fuel cell APU applications

    NASA Astrophysics Data System (ADS)

    Aicher, T.; Lenz, B.; Gschnell, F.; Groos, U.; Federici, F.; Caprile, L.; Parodi, L.

    The conversion of liquid hydrocarbons to a hydrogen-rich product gas is a central process step in fuel processors for auxiliary power units (APUs) for vehicles of all kinds. The selection of the reforming process depends on the fuel and the type of the fuel cell. For vehicle power trains, liquid hydrocarbons like gasoline, kerosene, and diesel are utilized and, therefore, they will also be the fuel for the respective APU systems. The fuel cells commonly envisioned for mobile APU applications are molten carbonate fuel cells (MCFC), solid oxide fuel cells (SOFC), and proton exchange membrane fuel cells (PEMFC). Since high-temperature fuel cells, e.g. MCFCs or SOFCs, can be supplied with a feed gas that contains carbon monoxide (CO), their fuel processor does not require reactors for CO reduction and removal. For PEMFCs, on the other hand, CO concentrations in the feed gas must not exceed 50 ppm, preferably 20 ppm, which requires additional reactors downstream of the reforming reactor. This paper gives an overview of the current state of fuel processor development for APU applications and APU system developments. Furthermore, it will present the latest developments at Fraunhofer ISE regarding fuel processors for high-temperature fuel cell APU systems on board ships and aircraft.

  12. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  13. Accelerating molecular dynamic simulation on the cell processor and Playstation 3.

    PubMed

    Luttmann, Edgar; Ensign, Daniel L; Vaidyanathan, Vishal; Houston, Mike; Rimon, Noam; Øland, Jeppe; Jayachandran, Guha; Friedrichs, Mark; Pande, Vijay S

    2009-01-30

    Implementation of molecular dynamics (MD) calculations on novel architectures will vastly increase their power to calculate the physical properties of complex systems. Herein, we detail algorithmic advances developed to accelerate MD simulations on the Cell processor, a commodity processor found in the PlayStation 3 (PS3). In particular, we discuss issues regarding memory access versus computation and the types of calculations which are best suited for streaming processors such as the Cell, focusing on implicit solvation models. We conclude with a comparison of improved performance on the PS3's Cell processor over more traditional processors. (c) 2008 Wiley Periodicals, Inc.
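
    The compute-versus-memory trade-off mentioned above is easiest to see in a nonbonded force kernel: many floating-point operations are performed per byte loaded, which is what makes such loops a good fit for streaming processors like the Cell. The C sketch below is a generic O(N^2) Lennard-Jones kernel for illustration only, not the authors' implicit-solvent Cell code.

      /* Generic O(N^2) Lennard-Jones force kernel: high arithmetic intensity
       * per particle pair loaded, hence well suited to streaming processors. */
      #include <math.h>

      void lj_forces(int n, const float *x, const float *y, const float *z,
                     float *fx, float *fy, float *fz, float eps, float sigma)
      {
          float s6 = powf(sigma, 6.0f);
          for (int i = 0; i < n; i++) {
              float fxi = 0.0f, fyi = 0.0f, fzi = 0.0f;
              for (int j = 0; j < n; j++) {
                  if (j == i) continue;
                  float dx = x[i]-x[j], dy = y[i]-y[j], dz = z[i]-z[j];
                  float r2 = dx*dx + dy*dy + dz*dz;
                  float inv2 = 1.0f / r2, inv6 = inv2*inv2*inv2;
                  /* (1/r) dU/dr for the 12-6 potential, applied along (dx,dy,dz) */
                  float f = 24.0f*eps*inv2*(2.0f*s6*s6*inv6*inv6 - s6*inv6);
                  fxi += f*dx; fyi += f*dy; fzi += f*dz;
              }
              fx[i] = fxi; fy[i] = fyi; fz[i] = fzi;
          }
      }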

  14. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and also in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance-per-dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  15. LEOPARD; SPOTS; spectrum calculation with depletion. [IBM360; UNIVAC1108; FORTRAN IV(H) (IBM360) and FORTRAN V (UNIVAC1108)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barry, R.F.

    LEOPARD is a unit cell homogenization and spectrum generation (MUFT-SOFOCATE type) program with a fuel depletion option. IBM360; UNIVAC1108; FORTRAN IV(H) (IBM360) and FORTRAN V (UNIVAC1108); OS/360 (IBM360) and EXEC2 (UNIVAC1108); 50K (decimal) memory.

  16. Preparation and in vitro function of granulocyte concentrates for transfusion to neonates using the IBM 2991 blood processor.

    PubMed

    Goldfinger, D; Medici, M A; Hsi, R; McPherson, J; Connelly, M

    1983-01-01

    Clinical studies have suggested that granulocyte transfusions may be of value in the treatment of septic neonatal patients who present with severe granulocytopenia. We have developed a protocol for the preparation of granulocyte concentrates from freshly collected units of whole blood, using an automated blood cell processor. The red cells were washed with saline. Then, the buffy coats were collected from the washed red cells and studied for their suitability as granulocyte concentrates for neonatal transfusion. The mean number of granulocytes per concentrate was 1.6 X 10(9) in a mean volume of 25 ml. Studies of granulocyte function, including viability, random mobility, chemotaxis, phagocytosis and nitro-blue tetrazolium reduction, demonstrated that the granulocytes were functionally unimpaired following preparation of the concentrates. These studies suggest that concentrates of functional granulocytes, suitable for transfusion to neonatal patients, can be prepared from fresh units of whole blood, using a cell processor. This procedure is more cost-effective than leukapheresis and allows for delivery of granulocytes for transfusion in a more timely fashion.

  17. Geospace simulations on the Cell BE processor

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D.

    2008-12-01

    OpenGGCM (Open Geospace General Circulation Model) is an established numerical code that simulates the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the Sun. Like other global magnetosphere codes, OpenGGCM's realism is limited by computational constraints on grid resolution. We investigate porting of the MHD solver to the Cell BE architecture, a novel inhomogeneous multicore architecture capable of up to 230 GFlops per processor. Realizing this high performance on the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallel approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the vector/SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We obtained excellent performance numbers, a speed-up of a factor of 25 compared to just using the main processor, while still keeping the numerical implementation details of the code maintainable.
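
    The slice-by-slice staging described above can be sketched as follows; on the SPE the two copies would be asynchronous DMA transfers (mfc_get/mfc_put) overlapped with computation via double buffering, but here plain memcpy stands in, and the dimensions and stencil update are illustrative assumptions rather than the OpenGGCM solver.

      /* Sketch of processing one 3D column slice by slice through a small
       * local-store-sized buffer; memcpy stands in for SPE DMA transfers. */
      #include <string.h>

      #define NXS 16                 /* slice is NXS x NYS cells (illustrative) */
      #define NYS 16

      static void update_slice(float *s)              /* placeholder MHD update */
      {
          for (int i = 0; i < NXS * NYS; i++)
              s[i] *= 0.99f;
      }

      void process_column(float *main_mem_column, int nslices)
      {
          float local[NXS * NYS];                     /* fits in the 256 KB local store */
          for (int k = 0; k < nslices; k++) {
              float *slice = main_mem_column + (size_t)k * NXS * NYS;
              memcpy(local, slice, sizeof local);     /* stand-in for mfc_get DMA */
              update_slice(local);
              memcpy(slice, local, sizeof local);     /* stand-in for mfc_put DMA */
          }
      }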

  18. Performance of the Cell processor for biomolecular simulations

    NASA Astrophysics Data System (ADS)

    De Fabritiis, G.

    2007-06-01

    The new Cell processor represents a turning point for computing intensive applications. Here, I show that for molecular dynamics it is possible to reach an impressive sustained performance in excess of 30 Gflops with a peak of 45 Gflops for the non-bonded force calculations, over one order of magnitude faster than a single core standard processor.

  19. Miniature Fuel Processors for Portable Fuel Cell Power Supplies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holladay, Jamie D.; Jones, Evan O.; Palo, Daniel R.

    2003-06-02

    Miniature and micro-scale fuel processors are discussed. The enabling technologies for these devices are the novel catalysts and the micro-technology-based designs. The novel catalyst allows for methanol reforming at high gas hourly space velocities of 50,000 hr-1 or higher, while maintaining carbon monoxide levels at 1% or less. The micro-technology-based designs enable the devices to be extremely compact and lightweight. The miniature fuel processors can nominally provide 25-50 watts equivalent of hydrogen, which is ample for soldier or personal portable power supplies. The integrated processors have a volume less than 50 cm3, a mass less than 150 grams, and thermal efficiencies of up to 83%. With reasonable assumptions on fuel cell efficiencies, anode gas and water management, parasitic power loss, etc., the energy density was estimated at 1700 Whr/kg. The miniature processors have been demonstrated with a carbon monoxide clean-up method and a fuel cell stack. The micro-scale fuel processors have been designed to provide up to 0.3 watt equivalent of power with efficiencies over 20%. They have a volume of less than 0.25 cm3 and a mass of less than 1 gram.

  20. A Comparison of Two Panasonic Lithium-Ion Batteries and Cells for the IBM Thinkpad

    NASA Technical Reports Server (NTRS)

    Jeevarajan, Judith A.; Cook, Joseph S.; Davies, Francis J.; Collins, Jacob; Bragg, Bobby J.

    2003-01-01

    The IBM Thinkpad 760XD has been used in the Orbiter and International Space Station since 2000. The Thinkpad is powered by a Panasonic Li-ion battery that has a voltage of 10.8 V and 3.0 Ah capacity. This Thinkpad is now being replaced by the IBM Thinkpad A31P, which has a Panasonic Li-ion battery with a voltage of 10.8 V and 4.0 Ah capacity. Both batteries have protective circuit boards. The Panasonic battery for the Thinkpad 760XD had 12 Panasonic 17500 cells of 0.75 Ah capacity in a 4P3S configuration. The new Panasonic battery has 6 Panasonic 18650 cells of 2.0 Ah capacity in a 2P3S configuration. The batteries and cells for both models have been evaluated for performance and safety. A comparison of the cells under similar test conditions will be presented. The performance of the cells has been evaluated under different rates of charge and discharge and different temperatures. The cells have been tested under abuse conditions and the safety features in the cells evaluated. The protective circuit board in the battery was also tested under conditions of overcharge, overdischarge, short circuit and unbalanced cell configurations. The results of the studies will be presented in this paper.
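
    As a quick arithmetic check of the two configurations quoted above (assuming a nominal 3.6 V per Li-ion cell, which is an assumption rather than a figure from the paper), the sketch below reproduces the stated pack ratings: 4P3S of 0.75 Ah cells and 2P3S of 2.0 Ah cells both give 10.8 V packs, with 3.0 Ah and 4.0 Ah, respectively.

      /* Series cells add voltage, parallel strings add capacity. */
      #include <stdio.h>

      int main(void)
      {
          const double v_cell = 3.6;   /* assumed nominal cell voltage */
          printf("760XD (4P3S, 0.75 Ah cells): %.1f V, %.2f Ah\n", 3 * v_cell, 4 * 0.75);
          printf("A31P  (2P3S, 2.0  Ah cells): %.1f V, %.2f Ah\n", 3 * v_cell, 2 * 2.0);
          return 0;
      }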

  1. Dense and Sparse Matrix Operations on the Cell Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid

    2005-05-01

    The slowing pace of commodity microprocessor performance improvements, combined with ever-increasing chip power demands, has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.
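
    One representative sparse kernel of the kind modeled in the paper is the compressed sparse row (CSR) matrix-vector product shown below; this is a standard scalar reference in C, and a Cell implementation would additionally partition rows across SPEs and stage blocks of the arrays through the local store.

      /* Standard CSR sparse matrix-vector multiply: y = A * x. */
      void spmv_csr(int nrows, const int *rowptr, const int *cols,
                    const double *vals, const double *x, double *y)
      {
          for (int i = 0; i < nrows; i++) {
              double sum = 0.0;
              for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                  sum += vals[k] * x[cols[k]];   /* indirect access to x */
              y[i] = sum;
          }
      }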

  2. Performance assessment of KORAT-3D on the ANL IBM-SP computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.

    1999-09-01

    The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).

  3. Efficacy of Code Optimization on Cache-Based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
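
    A typical locality transformation of the kind examined in the paper is loop blocking (tiling), sketched below for a matrix transpose; the tile size is machine dependent, which is precisely why such optimizations meet with varied success across cache-based processors. The example is illustrative, not taken from the paper's code.

      /* Blocked (tiled) transpose: each B x B tile stays resident in cache. */
      #define B 64                              /* tile edge, tuned per machine */

      void transpose_blocked(int n, const double *a, double *at)
      {
          for (int ii = 0; ii < n; ii += B)
              for (int jj = 0; jj < n; jj += B)
                  for (int i = ii; i < ii + B && i < n; i++)
                      for (int j = jj; j < jj + B && j < n; j++)
                          at[j * n + i] = a[i * n + j];
      }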

  4. The Easy Way of Finding Parameters in IBM (EWofFP-IBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turkan, Nureddin

    E2/M1 multipole mixing ratios of even-even nuclei in the transitional region, as well as B(E2) and B(M1) values, can be calculated by using the PHINT and/or NP-BOS codes. Correct calculations of the energies must be obtained to produce such calculations, and correct parameter values are needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), which is one of the models of nuclear structure physics. Here, the big problem is to find the best-fitted parameter values of the model. So, by using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for {sup 102-110}Pd and {sup 102-110}Ru isotopes were first obtained and then the energies were calculated. At the end, it was seen that the calculated results are in good agreement with the experimental ones. In addition, it was found that the energy values obtained by using the EWofFP-IBM are clearly better than the previous theoretical data.

  5. Bill Lang's contributions to acoustics at the International Business Machines Corp. (IBM) and to IBM in general

    NASA Astrophysics Data System (ADS)

    Nobile, Matthew A.; Chu, Richard C.

    2005-09-01

    Although Bill Lang's accomplishments and key roles in national and international standards and in the formation of INCE are widely recognized, sometimes it has to be remembered that for nearly 35 years he also had a ``day job'' at the IBM Corporation. His achievements at IBM were no less significant and enduring than those in external standards and professional societies. This paper will highlight some of the accomplishments and activities of Bill Lang as an IBM noise control engineer, the creator of the IBM Acoustics Lab in Poughkeepsie, the founder of the global Acoustics program at IBM, and his many other IBM leadership roles. Bill was also a long-serving IBM manager, with the full set of personnel issues to deal with, so his people-management skills were often called into play. Bill ended his long and fruitful IBM career at a high point. In 1988, he took an original idea of his to the top of IBM executive management, which led directly to the formation of the IBM Academy of Technology, today the preeminent body of IBM top technical leaders from around the world.

  6. Communication overhead on the Intel Paragon, IBM SP2 and Meiko CS-2

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Interprocessor communication overhead is a crucial measure of the power of parallel computing systems; its impact can severely limit the performance of parallel programs. This report presents measurements of communication overhead on three contemporary commercial multicomputer systems: the Intel Paragon, the IBM SP2 and the Meiko CS-2. In each case the time to communicate between processors is presented as a function of message length. The time for global synchronization and memory access is discussed. The performance of these machines in emulating hypercubes and executing random pairwise exchanges is also investigated. It is shown that the interprocessor communication time depends heavily on the specific communication pattern required. These observations contradict the commonly held belief that communication overhead on contemporary machines is independent of the placement of tasks on processors. The information presented in this report permits the evaluation of the efficiency of parallel algorithm implementations against standard baselines.
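
    The "time to communicate as a function of message length" measurement is classically done with a ping-pong test; the MPI sketch below shows the idea and is a generic modern stand-in, not necessarily the benchmark code used in the 1995 report.

      /* MPI ping-pong: round-trip time between ranks 0 and 1 vs. message length. */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          static char buf[1 << 20];                       /* up to 1 MB messages */
          for (int len = 1; len <= (1 << 20); len *= 2) {
              MPI_Barrier(MPI_COMM_WORLD);
              double t0 = MPI_Wtime();
              for (int it = 0; it < 100; it++) {
                  if (rank == 0) {
                      MPI_Send(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                      MPI_Recv(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                  } else if (rank == 1) {
                      MPI_Recv(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                      MPI_Send(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                  }
              }
              if (rank == 0)
                  printf("%7d bytes: %.2f us one-way\n",
                         len, (MPI_Wtime() - t0) / 100.0 / 2.0 * 1e6);
          }
          MPI_Finalize();
          return 0;
      }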

  7. Optimization of Particle-in-Cell Codes on RISC Processors

    NASA Technical Reports Server (NTRS)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
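
    The data-reorganization idea can be illustrated (in C here, rather than the Fortran of the codes discussed) with a structure-of-arrays particle layout: keeping each particle attribute in its own contiguous array, and keeping particles ordered by grid cell, lets the push and deposit loops stream through memory and reuse cached field values. The field names and the push step below are illustrative assumptions.

      /* Structure-of-arrays particle storage for a simple 1-D particle push. */
      typedef struct {
          double *x, *vx;               /* one contiguous array per attribute */
          int     n;
      } particles_soa;

      void push(particles_soa *p, const double *efield, double dt, double dx, int nx)
      {
          for (int i = 0; i < p->n; i++) {
              int cell = (int)(p->x[i] / dx);     /* particles kept sorted by cell */
              if (cell < 0) cell = 0;
              if (cell >= nx) cell = nx - 1;
              p->vx[i] += efield[cell] * dt;      /* accelerate */
              p->x[i]  += p->vx[i] * dt;          /* advance    */
          }
      }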

  8. IBM Application System/400 as the foundation of the Mayo Clinic/IBM PACS project

    NASA Astrophysics Data System (ADS)

    Rothman, Melvyn L.; Morin, Richard L.; Persons, Kenneth R.; Gibbons, Patricia S.

    1990-08-01

    An IBM Application System/400 (AS/400) anchors the Mayo Clinic/IBM joint development PACS project. This paper highlights some of the AS/400's features and the resulting benefits which make it a strong foundation for a medical image archival and review system. Among the AS/400's key features are: (1) a high-level machine architecture; (2) object orientation; (3) a relational database and other functions integrated into the system's architecture; and (4) high-function interfaces to IBM Personal Computers and IBM Personal System/2s (PS/2™).

  9. Deep learning for medical image segmentation - using the IBM TrueNorth neurosynaptic system

    NASA Astrophysics Data System (ADS)

    Moran, Steven; Gaonkar, Bilwaj; Whitehead, William; Wolk, Aidan; Macyszyn, Luke; Iyer, Subramanian S.

    2018-03-01

    Deep convolutional neural networks have found success in semantic image segmentation tasks in computer vision and medical imaging. These algorithms are executed on conventional von Neumann processor architectures or GPUs. This is suboptimal. Neuromorphic processors that replicate the structure of the brain are better suited to train and execute deep learning models for image segmentation by relying on massively parallel processing. However, given that they closely emulate the human brain, on-chip hardware and digital memory limitations also constrain them. Adapting deep learning models to execute image segmentation tasks on such chips requires specialized training and validation. In this work, we demonstrate, for the first time, spinal image segmentation performed using a deep learning network implemented on the neuromorphic hardware of the IBM TrueNorth Neurosynaptic System, and validate the performance of our network by comparing it to human-generated segmentations of spinal vertebrae and disks. To achieve this on neuromorphic hardware, the training model constrains the coefficients of individual neurons to {-1,0,1} using the Energy Efficient Deep Neuromorphic (EEDN) network training algorithm. Given the 1 million neurons and 256 million synapses, the scale and size of the neural network implemented by the IBM TrueNorth allow us to execute the requisite mapping between segmented images and non-uniform intensity MR images >20 times faster than on a GPU-accelerated network and using <0.1 W. This speed and efficiency imply that a trained neuromorphic chip can be deployed in intra-operative environments where real-time medical image segmentation is necessary.
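
    A minimal sketch of constraining trained coefficients to {-1,0,1} is thresholding against a scale derived from the weights, shown below; this is in the spirit of, but not identical to, the EEDN training constraint mentioned above, and the threshold fraction is an illustrative assumption.

      /* Ternarize weights to {-1, 0, 1} by thresholding against a fraction of
       * the mean absolute weight (the fraction is an arbitrary choice here). */
      #include <math.h>

      void ternarize(const float *w, signed char *q, int n)
      {
          float mean_abs = 0.0f;
          for (int i = 0; i < n; i++) mean_abs += fabsf(w[i]);
          mean_abs /= (float)n;
          float thr = 0.7f * mean_abs;

          for (int i = 0; i < n; i++)
              q[i] = (w[i] > thr) ? 1 : (w[i] < -thr) ? -1 : 0;
      }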

  10. Flexstab on the IBM 360

    NASA Technical Reports Server (NTRS)

    Pyle, R. S.; Sykora, R. G.; Denman, S. C.

    1976-01-01

    FLEXSTAB, an array of computer programs developed on CDC equipment, has been converted to operate on the IBM 360 computation system. Instructions for installing, validating, and operating FLEXSTAB on the IBM 360 are included. Hardware requirements are itemized, and supplemental materials describe JCL sequences, the CDC-to-IBM conversion, the input/output subprograms, and the interprogram data flow.

  11. Accelerating Climate and Weather Simulations through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  12. Online Help from IBM.

    ERIC Educational Resources Information Center

    Moore, Jack

    1988-01-01

    The article describes the IBM/Special Needs Exchange which consists of: (1) electronic mail, conferencing, and a library of text and program files on the CompuServe Information Service; and (2) a dial-in database of special education software for IBM and compatible computers. (DB)

  13. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  14. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-10-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer.

  15. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed Central

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-01-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer. PMID:3676444

  16. MATCHED FILTER COMPUTATION ON FPGA, CELL, AND GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAKER, ZACHARY K.; GOKHALE, MAYA B.; TRIPP, JUSTIN L.

    2007-01-08

    The matched filter is an important kernel in the processing of hyperspectral data. The filter enables researchers to sift useful data from instruments that span large frequency bands. In this work, they evaluate the performance of a matched filter algorithm implementation on an accelerated co-processor (XD1000), the IBM Cell microprocessor, and the NVIDIA GeForce 6900 GTX GPU graphics card. They provide extensive discussion of the challenges and opportunities afforded by each platform. In particular, they explore the problems of partitioning the filter most efficiently between the host CPU and the co-processor. Using their results, they derive several performance metrics that provide the optimal solution for a variety of application situations.
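
    As a point of reference for the kernel being partitioned, a direct-form matched filter is simply a sliding correlation of the input with the template, as in the C sketch below; for long filters it is normally computed via FFT, and on the accelerators the loop over output samples is what gets split between the host CPU and the co-processor. This is an illustration, not the report's implementation.

      /* Direct-form matched filter: correlate input x with template h. */
      void matched_filter(const float *x, int nx,   /* input samples       */
                          const float *h, int nh,   /* template            */
                          float *y)                 /* nx - nh + 1 outputs */
      {
          for (int i = 0; i + nh <= nx; i++) {
              float acc = 0.0f;
              for (int k = 0; k < nh; k++)
                  acc += x[i + k] * h[k];           /* correlation, not convolution */
              y[i] = acc;
          }
      }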

  17. Simulation of a 250 kW diesel fuel processor/PEM fuel cell system

    NASA Astrophysics Data System (ADS)

    Amphlett, J. C.; Mann, R. F.; Peppley, B. A.; Roberge, P. R.; Rodrigues, A.; Salvador, J. P.

    Polymer-electrolyte membrane (PEM) fuel cell systems offer a potential power source for utility and mobile applications. Practical fuel cell systems use fuel processors for the production of hydrogen-rich gas. Liquid fuels, such as diesel or other related fuels, are attractive options as feeds to a fuel processor. The generation of hydrogen gas for fuel cells, in most cases, becomes the crucial design issue with respect to weight and volume in these applications. Furthermore, these systems will require a gas clean-up system to ensure that the fuel quality meets the demands of the cell anode. The endothermic nature of the reformer will have a significant effect on the overall system efficiency. The gas clean-up system may also significantly affect the overall heat balance. To optimize the performance of this integrated system, therefore, waste heat must be used effectively. Previously, we have concentrated on catalytic methanol-steam reforming. A model of a methanol steam reformer has been previously developed and has been used as the basis for a new, higher-temperature model for liquid hydrocarbon fuels. Similarly, our fuel cell evaluation program previously led to the development of a steady-state electrochemical fuel cell model (SSEM). The hydrocarbon fuel processor model and the SSEM have now been incorporated in the development of a process simulation of a 250 kW diesel-fueled reformer/fuel cell system using a process simulator. The performance of this system has been investigated for a variety of operating conditions and a preliminary assessment of thermal integration issues has been carried out. This study demonstrates the application of a process simulation model as a design analysis tool for the development of a 250 kW fuel cell system.

  18. Metal membrane-type 25-kW methanol fuel processor for fuel-cell hybrid vehicle

    NASA Astrophysics Data System (ADS)

    Han, Jaesung; Lee, Seok-Min; Chang, Hyuksang

    A 25-kW on-board methanol fuel processor has been developed. It consists of a methanol steam reformer, which converts methanol to hydrogen-rich gas mixture, and two metal membrane modules, which clean-up the gas mixture to high-purity hydrogen. It produces hydrogen at rates up to 25 N m 3/h and the purity of the product hydrogen is over 99.9995% with a CO content of less than 1 ppm. In this fuel processor, the operating condition of the reformer and the metal membrane modules is nearly the same, so that operation is simple and the overall system construction is compact by eliminating the extensive temperature control of the intermediate gas streams. The recovery of hydrogen in the metal membrane units is maintained at 70-75% by the control of the pressure in the system, and the remaining 25-30% hydrogen is recycled to a catalytic combustion zone to supply heat for the methanol steam-reforming reaction. The thermal efficiency of the fuel processor is about 75% and the inlet air pressure is as low as 4 psi. The fuel processor is currently being integrated with 25-kW polymer electrolyte membrane fuel-cell (PEMFC) stack developed by the Hyundai Motor Company. The stack exhibits the same performance as those with pure hydrogen, which proves that the maximum power output as well as the minimum stack degradation is possible with this fuel processor. This fuel-cell 'engine' is to be installed in a hybrid passenger vehicle for road testing.

  19. Earth Observations Division version of the Laboratory for Applications of Remote Sensing System (EOD-LARSYS) user guide for the IBM 370/148. Volume 2: User reference manual

    NASA Technical Reports Server (NTRS)

    Aucoin, P. J.; Stewart, J.; Mckay, M. F. (Principal Investigator)

    1980-01-01

    This document presents instructions for analysts who use the EOD-LARSYS as programmed on the Purdue University IBM 370/148 (recently replaced by the IBM 3031) computer. It presents sample applications, control cards, and error messages for all processors in the system and gives detailed descriptions of the mathematical procedures and information needed to execute the system and obtain the desired output. EOD-LARSYS is the JSC version of an integrated batch system for analysis of multispectral scanner imagery data. The data included is designed for use with the as built documentation (volume 3) and the program listings (volume 4). The system is operational from remote terminals at Johnson Space Center under the virtual machine/conversational monitor system environment.

  20. A natural-gas fuel processor for a residential fuel cell system

    NASA Astrophysics Data System (ADS)

    Adachi, H.; Ahmed, S.; Lee, S. H. D.; Papadias, D.; Ahluwalia, R. K.; Bendert, J. C.; Kanner, S. A.; Yamazaki, Y.

    A system model was used to develop an autothermal reforming fuel processor to meet the targets of 80% efficiency (higher heating value) and start-up energy consumption of less than 500 kJ when operated as part of a 1-kWe natural-gas fueled fuel cell system for cogeneration of heat and power. The key catalytic reactors of the fuel processor - namely the autothermal reformer, a two-stage water gas shift reactor and a preferential oxidation reactor - were configured and tested in a breadboard apparatus. Experimental results demonstrated a reformate containing ∼48% hydrogen (on a dry basis and with pure methane as fuel) and less than 5 ppm CO. The effects of steam-to-carbon and part load operations were explored.

  1. Gyrokinetic micro-turbulence simulations on the NERSC 16-way SMP IBM SP computer: experiences and performance results

    NASA Astrophysics Data System (ADS)

    Ethier, Stephane; Lin, Zhihong

    2001-10-01

    Earlier this year, the National Energy Research Scientific Computing Center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors, each running at a peak performance of 1.5 GFlops, this IBM SP machine has a theoretical performance of almost 3.8 TFlops. To efficiently harness such computing power in one single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared-memory nodes of the NERSC IBM SP. Performance results are shown, as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)
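
    The basic shape of such a mixed-mode MPI-OpenMP model, with one MPI process per SMP node and OpenMP threads across the node's CPUs inside the compute loops, is sketched below; the loop body is a placeholder, not the GTC code.

      /* Mixed-mode skeleton: MPI decomposition across nodes, OpenMP within a node. */
      #include <stdio.h>
      #include <mpi.h>
      #include <omp.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank, nranks;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nranks);

          double local = 0.0;
          #pragma omp parallel for reduction(+:local)    /* threads within the node */
          for (int i = rank; i < 1000000; i += nranks)   /* MPI work decomposition  */
              local += 1.0 / (1.0 + i);                  /* placeholder work        */

          double global;
          MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0)
              printf("sum = %f  (threads per rank: %d)\n", global, omp_get_max_threads());
          MPI_Finalize();
          return 0;
      }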

  2. Monte Carlo dose calculation using a cell processor based PlayStation 3 system

    NASA Astrophysics Data System (ADS)

    Chow, James C. L.; Lam, Phil; Jaffray, David A.

    2012-02-01

    This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured by the profiler gprof, specifying the number of executions and the total time spent in the functions. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc is made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.

  3. A self-sustained, complete and miniaturized methanol fuel processor for proton exchange membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Yang, Mei; Jiao, Fengjun; Li, Shulian; Li, Hengqiang; Chen, Guangwen

    2015-08-01

    A self-sustained, complete and miniaturized methanol fuel processor has been developed based on modular integration and microreactor technology. The fuel processor is comprised of one methanol oxidative reformer, one methanol combustor and one two-stage CO preferential oxidation unit. Microchannel heat exchanger is employed to recover heat from hot stream, miniaturize system size and thus achieve high energy utilization efficiency. By optimized thermal management and proper operation parameter control, the fuel processor can start up in 10 min at room temperature without external heating. A self-sustained state is achieved with H2 production rate of 0.99 Nm3 h-1 and extremely low CO content below 25 ppm. This amount of H2 is sufficient to supply a 1 kWe proton exchange membrane fuel cell. The corresponding thermal efficiency of whole processor is higher than 86%. The size and weight of the assembled reactors integrated with microchannel heat exchangers are 1.4 L and 5.3 kg, respectively, demonstrating a very compact construction of the fuel processor.

  4. Parallel hyperbolic PDE simulation on clusters: Cell versus GPU

    NASA Astrophysics Data System (ADS)

    Rostrup, Scott; De Sterck, Hans

    2010-12-01

    Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications. Program summary: Program title: SWsolver; Catalogue identifier: AEGY_v1_0; Program summary URL
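
    For readers unfamiliar with the class of computation being accelerated, one explicit time step on a structured grid has the shape of the C sketch below, here a 1-D Lax-Friedrichs update for linear advection standing in for the paper's shallow-water solver; on Cell or GPU clusters this per-cell update is what gets vectorized and distributed.

      /* One explicit Lax-Friedrichs step for du/dt + a du/dx = 0 on a 1-D grid. */
      void lax_friedrichs_step(int n, const double *u, double *unew,
                               double a, double dt, double dx)
      {
          double c = a * dt / (2.0 * dx);
          for (int i = 1; i < n - 1; i++)
              unew[i] = 0.5 * (u[i - 1] + u[i + 1]) - c * (u[i + 1] - u[i - 1]);
          unew[0] = u[0];                    /* simple fixed boundary values */
          unew[n - 1] = u[n - 1];
      }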

  5. Ada (Tradename) Compiler Validation Summary Report. International Business Machines Corporation. IBM Development System for the Ada Language for VM/CMS, Version 1.0. IBM 4381 (IBM System/370) under VM/CMS.

    DTIC Science & Technology

    1986-04-29

    COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM Development System for the Ada Language for VM/CMS, Version 1.0 IBM 4381...tested using command scripts provided by International Business Machines Corporation. These scripts were reviewed by the validation team. Tests were run...s): IBM 4381 (System/370) Operating System: VM/CMS, release 3.6 International Business Machines Corporation has made no deliberate extensions to the

  6. Ada (Trade Name) Compiler Validation Summary Report: International Business Machines Corporation. IBM Development System for the Ada Language System, Version 1.1.0, IBM 4381 under VM/SP CMS Host, IBM 4381 under MVS Target

    DTIC Science & Technology

    1988-05-20

    AVF Control Number: AVF-VSR-84.1087, 87-03-10-TEL. Ada® COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM...System, Version 1.1.0, International Business Machines Corporation, Wright-Patterson AFB. IBM 4381 under VM/SP CMS, Release 3.6 (host) and IBM 4381...an IBM 4381 operating under MVS, Release 3.8. On-site testing was performed 18 May 1987 through 20 May 1987 at International Business Machines

  7. High-performance ultra-low power VLSI analog processor for data compression

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1996-01-01

    An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.
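
    The operation performed by the analog array can be stated as a scalar reference in C: compare an M-component input vector with each of N codebook vectors and return the index of the closest one. Squared Euclidean distance is used below as the closeness measure, which is an assumption; the patent text leaves the exact measure to the combination and closeness-determination devices.

      /* Scalar reference for nearest-codeword search over an N x M codebook. */
      int nearest_codeword(const float *input, const float *codebook,
                           int m, int n)       /* codebook holds n vectors of m components */
      {
          int best = 0;
          float best_d = 1e30f;
          for (int j = 0; j < n; j++) {        /* one "column" per codeword */
              float d = 0.0f;
              for (int i = 0; i < m; i++) {
                  float diff = input[i] - codebook[j * m + i];
                  d += diff * diff;
              }
              if (d < best_d) { best_d = d; best = j; }
          }
          return best;
      }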

  8. A novel VLSI processor architecture for supercomputing arrays

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

    Design of the processor element for general-purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify the mapping of complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP,IP,OP,CM,R,SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high-performance mask-programmable PAcube arrays.

  9. Technology and design of an active-matrix OLED on crystalline silicon direct-view display for a wristwatch computer

    NASA Astrophysics Data System (ADS)

    Sanford, James L.; Schlig, Eugene S.; Prache, Olivier; Dove, Derek B.; Ali, Tariq A.; Howard, Webster E.

    2002-02-01

    The IBM Research Division and eMagin Corp. have jointly developed a low-power VGA direct-view active-matrix OLED display, fabricated on a crystalline silicon CMOS chip. The display is incorporated in IBM prototype wristwatch computers running the Linux operating system. IBM designed the silicon chip and eMagin developed the organic stack and performed the back-end-of-line processing and packaging. Each pixel is driven by a constant current source controlled by a CMOS RAM cell, and the display receives its data from the processor memory bus. This paper describes the OLED technology and packaging, and outlines the design of the pixel and display electronics and the processor interface. Experimental results are presented.

  10. IBM PC enhances the world's future

    NASA Technical Reports Server (NTRS)

    Cox, Jozelle

    1988-01-01

    Although the purpose of this research is to illustrate the importance of computers to the public, particularly the IBM PC, the present examination also includes computers developed before the IBM PC was brought into use. IBM, as well as other computing organizations, began serving the public years ago and continues to find ways to enhance people's lives. With new developments in supercomputers like the Cray-2, and the recent advances in artificial intelligence programming, the human race is gaining knowledge at a rapid pace. All have benefited from the development of computers in the world; not only have they brought new assets to life, but they have also made life more and more of a challenge every day.

  11. Highly parallel reconfigurable computer architecture for robotic computation having plural processor cells each having right and left ensembles of plural processors

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1994-01-01

    In a computer having a large number of single-instruction multiple data (SIMD) processors, each of the SIMD processors has two sets of three individual processor elements controlled by a master control unit and interconnected among a plurality of register file units where data is stored. The register files input and output data in synchronism with a minor cycle clock under control of two slave control units controlling the register file units connected to respective ones of the two sets of processor elements. Depending upon which ones of the register file units are enabled to store or transmit data during a particular minor clock cycle, the processor elements within an SIMD processor are connected in rings or in pipeline arrays, and may exchange data with the internal bus or with neighboring SIMD processors through interface units controlled by respective ones of the two slave control units.

  12. Hot Chips and Hot Interconnects for High End Computing Systems

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters, respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in NEC SX-6/7; 8. Power 4+ processor, which is used in Hitachi SR11000; 9. NEC proprietary processor, which is used in Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  13. Control apparatus and method for efficiently heating a fuel processor in a fuel cell system

    DOEpatents

    Doan, Tien M.; Clingerman, Bruce J.

    2003-08-05

    A control apparatus and method for efficiently controlling the amount of heat generated by a fuel processor in a fuel cell system by determining a temperature error between actual and desired fuel processor temperatures. The temperature error is converted to a combustor fuel injector command signal or a heat dump valve position command signal depending upon the type of temperature error. Logic controls are responsive to the combustor fuel injector command signals and the heat dump valve position command signal to prevent the combustor fuel injector command signal from being generated if the heat dump valve is opened or, alternately, to prevent the heat dump valve position command signal from being generated if the combustor fuel injector is opened.
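
    The control logic described in this record can be summarized in a short sketch: a signed temperature error selects either the combustor fuel injector command or the heat dump valve command, and an interlock keeps the two from being issued at the same time. The gains, thresholds, and function names below are illustrative assumptions, not values from the patent.

        #include <stdbool.h>

        struct fp_commands {
            double combustor_fuel_cmd;   /* fuel injector command signal     */
            double heat_dump_valve_cmd;  /* heat dump valve position command */
        };

        /* Sketch: convert a fuel-processor temperature error into one of
         * the two command signals, with a simple interlock so that only
         * one of them is generated at a time, as the abstract describes. */
        struct fp_commands control_step(double desired_temp_c, double actual_temp_c,
                                        bool heat_dump_valve_open, bool combustor_injector_open)
        {
            const double k_fuel = 0.05;   /* illustrative proportional gains */
            const double k_valve = 0.02;
            double error = desired_temp_c - actual_temp_c;
            struct fp_commands cmd = {0.0, 0.0};

            if (error > 0.0 && !heat_dump_valve_open) {
                /* Too cold: request more combustor fuel, but only if the
                 * heat dump valve is not already open. */
                cmd.combustor_fuel_cmd = k_fuel * error;
            } else if (error < 0.0 && !combustor_injector_open) {
                /* Too hot: open the heat dump valve, but only if the
                 * combustor fuel injector is not already open. */
                cmd.heat_dump_valve_cmd = -k_valve * error;
            }
            return cmd;
        }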

  14. Ada Compiler Validation Summary Report: Certificate Number: 880318W1. 09043 International Business Machines Corporation IBM Development System for the Ada Language, Version 2.1.0 IBM 4381 under VM/HPO, Host IBM 4381 under MVS/XA, Target

    DTIC Science & Technology

    1988-03-28

    International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0; IBM 4381 under VM/HPO (host), IBM 4381 under MVS/XA (target). Ada Joint Program Office (AJPO). Compliance statement excerpt: "... Standard ANSI/MIL-STD-1815A in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record ..."

  15. Development of a Soldier-Portable Fuel Cell Power System, Part I: A Bread-Board Methanol Fuel Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palo, Daniel R.; Holladay, Jamelyn D.; Rozmiarek, Robert T.

    A 15-We portable power system is being developed for the US Army, comprising a hydrogen-generating fuel reformer coupled to a hydrogen-converting fuel cell. As a first phase of this project, a methanol steam reformer system was developed and demonstrated. The reformer system included a combustor, two vaporizers, and a steam-reforming reactor. The device was demonstrated as a thermally independent unit over the range of 14 to 80 Wt output. Assuming a 14-day mission life and an ultimate 1-kg fuel processor/fuel cell assembly, a base case was chosen to illustrate the expected system performance. Operating at 13 We, the system yielded a fuel processor efficiency of 45% (LHV of H2 out/LHV of fuel in) and an estimated net efficiency of 22% (assuming a fuel cell efficiency of 48%). The resulting energy density of 720 W-hr/kg is several times the energy density of the best lithium-ion batteries. Some immediate areas of improvement in thermal management have also been identified, and an integrated fuel processor is under development. The final system will be a hybrid, containing a fuel reformer, fuel cell, and rechargeable battery. The battery will provide power for startup and added capacity for times of peak power demand.
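
    As an editorial aside, the efficiency figures quoted above combine multiplicatively; written out in LaTeX with the authors' assumed 48% fuel cell efficiency:

        \eta_{fp} = \frac{\mathrm{LHV}_{\mathrm{H_2,\,out}}}{\mathrm{LHV}_{\mathrm{fuel,\,in}}} \approx 0.45, \qquad
        \eta_{net} \approx \eta_{fp} \times \eta_{fc} = 0.45 \times 0.48 \approx 0.22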

  16. [IBM Work and Personal Life Balance Programs.

    ERIC Educational Resources Information Center

    International Business Machines Corp., Armonk, NY.

    These five brochures describe the IBM Corporation's policies, programs, and initiatives designed to meet the needs of employees' child care and family responsibilities as they move through various stages of employment with IBM. The Work and Personal Life Balance Programs brochure outlines (1) policies for flexible work schedules, including…

  17. Ada (Tradename) Compiler Validation Summary Report. International Business Machines Corporation. IBM Development System for the Ada Language for MVS, Version 1.0. IBM 4381 (IBM System/370) under MVS.

    DTIC Science & Technology

    1986-05-05

    AVF-VSR-36.0187. Ada Compiler Validation Summary Report: International Business Machines Corporation, IBM Development System for the Ada Language for MVS, Version 1.0, IBM 4381 (IBM System/370) under MVS. Tests withdrawn from ACVC Version 1.7 were not run. The compiler was tested using command scripts provided by International Business Machines Corporation. Appendix A contains the compliance statement submitted by International Business Machines Corporation concerning the IBM compiler.

  18. Introduction of a closed-system cell processor for red blood cell washing: postimplementation monitoring of safety and efficacy.

    PubMed

    Acker, Jason P; Hansen, Adele L; Yi, Qi-Long; Sondi, Nayana; Cserti-Gazdewich, Christine; Pendergrast, Jacob; Hannach, Barbara

    2016-01-01

    After introduction of a closed-system cell processor, the effect of this product change on safety, efficacy, and utilization of washed red blood cells (RBCs) was assessed. This study was a pre-/postimplementation observational study. Efficacy data were collected from sequentially transfused washed RBCs received as prophylactic therapy by β-thalassemia patients during a 3-month period before and after implementation of the Haemonetics ACP 215 closed-system processor. Before implementation, an open system (TerumoBCT COBE 2991) was used to wash RBCs. The primary endpoint for efficacy was a change in hemoglobin (Hb) concentration corrected for the duration between transfusions. The primary endpoint for safety was the frequency of adverse transfusion reactions (ATRs) in all washed RBCs provided by Canadian Blood Services to the transfusion service for 12 months before and after implementation. Data were analyzed from more than 300 RBC units transfused to 31 recipients before implementation and 29 recipients after implementation. The number of units transfused per episode was reduced significantly after implementation, from a mean of 3.5 units to a mean of 3.1 units (p < 0.005). The corrected change in Hb concentration was not significantly different before and after implementation. ATRs occurred in 0.15% of transfusions both before and after implementation. Safety and efficacy of washed RBCs were not affected after introduction of a closed-system cell processor. The ACP 215 allowed for an extended expiry time, improving inventory management and overall utilization of washed RBCs. Transfusion of fewer RBCs per episode reduced exposure of recipients to allogeneic blood products while maintaining efficacy. © 2015 AABB.

  19. Analyzing checkpointing trends for applications on the IBM Blue Gene/P system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naik, H.; Gupta, R.; Beckman, P.

    Current petascale systems have tens of thousands of hardware components and complex system software stacks, which increase the probability of faults occurring during the lifetime of a process. Checkpointing has been a popular method of providing fault tolerance in high-end systems. While considerable research has been done to optimize checkpointing, in practice the method still involves a high-cost overhead for users. In this paper, we study the checkpointing overhead seen by applications running on leadership-class machines such as the IBM Blue Gene/P at Argonne National Laboratory. We study various applications and design a methodology to assist users in understanding and choosing checkpointing frequency and reducing the overhead incurred. In particular, we study three popular applications -- the Grid-Based Projector-Augmented Wave application, the Carr-Parrinello Molecular Dynamics application, and a Nek5000 computational fluid dynamics application -- and analyze their memory usage and possible checkpointing trends on 32,768 processors of the Blue Gene/P system.
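
    As background for the checkpointing-frequency question studied in this record, one commonly used rule of thumb is Young's approximation for the checkpoint interval; it is quoted here only as context and is not necessarily the methodology used in the paper. With checkpoint write cost \delta and mean time between failures M, the near-optimal interval is

        \tau_{opt} \approx \sqrt{2\,\delta\,M}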

  20. The IBM Story

    NASA Astrophysics Data System (ADS)

    Loving, Charles

    This chapter explores an actual transition from a product-based company to one where services dominate. That company is IBM and the transition started in the early 1990s. As you will see, the changes do not happen overnight in a "big bang" apocalyptic event but more gradually in a series of phases; indeed, at the time of writing (2009), we are almost 20 years into the transition and we know that we are not finished yet. Underneath the headline change from products to services, there are a myriad of other process and procedural changes that have to be made to support the business and allow it to change dynamically in order to meet the needs of its clients, its shareholders and its suppliers. In this chapter, we will show how IBM has responded to these requirements and how it is applying the lessons learnt in this process to create an agenda for innovation in service creation and delivery to address global problems.

  1. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating-point numbers and on the order of 100 floating-point operations per grid point. The computations at all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). Then we perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.
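
    For readers unfamiliar with the two quantities measured above, these are the standard definitions implied by the abstract (not additional results): single-processor efficiency is the ratio of observed to peak performance, and scalability is reported as speedup on p CPUs,

        E = \frac{P_{observed}}{P_{peak}}, \qquad S(p) = \frac{T(1)}{T(p)}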

  2. Development of a soldier-portable fuel cell power system. Part I: A bread-board methanol fuel processor

    NASA Astrophysics Data System (ADS)

    Palo, Daniel R.; Holladay, Jamie D.; Rozmiarek, Robert T.; Guzman-Leong, Consuelo E.; Wang, Yong; Hu, Jianli; Chin, Ya-Huei; Dagle, Robert A.; Baker, Eddie G.

    A 15-We portable power system is being developed for the US Army that consists of a hydrogen-generating fuel reformer coupled to a proton-exchange membrane fuel cell. In the first phase of this project, a methanol steam reformer system was developed and demonstrated. The reformer system included a combustor, two vaporizers, and a steam reforming reactor. The device was demonstrated as a thermally independent unit over the range of 14-80 Wt output. Assuming a 14-day mission life and an ultimate 1-kg fuel processor/fuel cell assembly, a base case was chosen to illustrate the expected system performance. Operating at 13 We, the system yielded a fuel processor efficiency of 45% (LHV of H2 out/LHV of fuel in) and an estimated net efficiency of 22% (assuming a fuel cell efficiency of 48%). The resulting energy density of 720 Wh/kg is several times the energy density of the best lithium-ion batteries. Some immediate areas of improvement in thermal management also have been identified, and an integrated fuel processor is under development. The final system will be a hybrid, containing a fuel reformer, a fuel cell, and a rechargeable battery. The battery will provide power for start-up and added capacity for times of peak power demand.

  3. Attaching IBM-compatible 3380 disks to Cray X-MP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Midlock, J.L.

    1989-01-01

    A method of attaching IBM-compatible 3380 disks directly to a Cray X-MP via the XIOP with a BMC is described. The IBM 3380 disks appear to the UNICOS operating system as DD-29 disks with UNICOS file systems. IBM 3380 disks provide cheap, reliable, large-capacity disk storage. Combined with a small number of high-speed Cray disks, the IBM disks provide for the bulk of the storage for small files and infrequently used files. Cray Research designed the BMC and its supporting software in the XIOP to allow IBM tapes and other devices to be attached to the X-MP. No hardware changes were necessary, and we added less than 2000 lines of code to the XIOP to accomplish this project. This system has been in operation for over eight months. Future enhancements such as the use of a cache controller and attachment to a Y-MP are also described. 1 tab.

  4. Distributed processor allocation for launching applications in a massively connected processors complex

    DOEpatents

    Pedretti, Kevin

    2008-11-18

    A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.

  5. Interactive graphics system for IBM 1800 computer

    NASA Technical Reports Server (NTRS)

    Carleton, T. P.; Howell, D. R.; Mish, W. H.

    1972-01-01

    A FORTRAN compatible software system that has been developed to provide an interactive graphics capability for the IBM 1800 computer is described. The interactive graphics hardware consists of a Hewlett-Packard 1300A cathode ray tube, Sanders photopen, digital to analog converters, pulse counter, and necessary interface. The hardware is available from IBM as several related RPQ's. The software developed permits the application programmer to use IBM 1800 FORTRAN to develop a display on the cathode ray tube which consists of one or more independent units called pictures. The software permits a great deal of flexibility in the manipulation of these pictures and allows the programmer to use the photopen to interact with the displayed data and make decisions based on information returned by the photopen.

  6. Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture

    NASA Astrophysics Data System (ADS)

    Glosli, James

    2013-03-01

    With the stall in clock cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts on a processor. The multi-core evolution has been expressed in both symmetric multiprocessor (SMP) architectures and CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a Molecular Dynamics code called ddcMD. This is a code developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid. This is a code that was recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lens of these efforts, I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  7. Self-sustained operation of a kWe-class kerosene-reforming processor for solid oxide fuel cells

    NASA Astrophysics Data System (ADS)

    Yoon, Sangho; Bae, Joongmyeon; Kim, Sunyoung; Yoo, Young-Sung

    In this paper, fuel-processing technologies are developed for application in residential power generation (RPG) in solid oxide fuel cells (SOFCs). Kerosene is selected as the fuel because of its high hydrogen density and because of the established infrastructure that already exists in South Korea. A kerosene fuel processor with two different reaction stages, autothermal reforming (ATR) and adsorptive desulfurization reactions, is developed for SOFC operations. ATR is suited to the reforming of liquid hydrocarbon fuels because oxygen-aided reactions can break the aromatics in the fuel and steam can suppress carbon deposition during the reforming reaction. ATR can also be implemented as a self-sustaining reactor due to the exothermicity of the reaction. The kWe self-sustained kerosene fuel processor, including the desulfurizer, operates for about 250 h in this study. This fuel processor does not require a heat exchanger between the ATR reactor and the desulfurizer or electric equipment for heat supply and fuel or water vaporization because a suitable temperature of the ATR reformate is reached for H2S adsorption on the ZnO catalyst beds in the desulfurizer. Although the CH4 concentration in the reformate gas of the fuel processor is higher due to the lower temperature of ATR tail gas, SOFCs can directly use CH4 as a fuel with the addition of sufficient steam feeds (H2O/CH4 ≥ 1.5), in contrast to low-temperature fuel cells. The reforming efficiency of the fuel processor is about 60%, and the desulfurizer removed H2S to a sufficient level to allow for the operation of SOFCs.

  8. Ada Compiler Validation Summary Report: Certificate Number: 890420W1. 10075 International Business Machines Corporation. IBM Development System, for the Ada Language CMS/MVS Ada Cross Compiler, Version 2.1.1 IBM 3083 Host and IBM 4381 Target

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System for the Ada Language, CMS/MVS Ada Cross Compiler, Version 2.1.1, Wright-Patterson AFB. Validation Summary Report, Certificate Number: 890420W1.10075. The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation team, using all default option settings.

  9. Ada (Trade Name) Compiler Validation Summary Report: International Business Machines Corporation. IBM Development System for the Ada Language System, Version 1.1.0, IBM 4381 under MVS.

    DTIC Science & Technology

    1988-05-22

    Ada Compiler Validation Summary Report, 22 May 1987 to 22 May 1988: International Business Machines Corporation, IBM Development System for the Ada Language System, Version 1.1.0, Wright-Patterson AFB. IBM 4381 under MVS.

  10. Design and development of an IBM/VM menu system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cazzola, D.J.

    1992-10-01

    This report describes a full-screen menu system developed using IBM's Interactive System Productivity Facility (ISPF) and the REXX programming language. The software was developed for the 2800 IBM/VM Electrical Computer Aided Design (ECAD) system. The system was developed to deliver electronic drawing definitions to a corporate drawing release system. Although this report documents the status of the menu system when it was retired, the methodologies used and the requirements defined are very applicable to replacement systems.

  11. Enhancements to the IBM version of COSMIC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Brown, W. Keith

    1989-01-01

    Major improvements were made to the IBM version of COSMIC/NASTRAN by RPK Corporation under contract to IBM Corporation. These improvements will become part of COSMIC's IBM version and will be available in the second quarter of 1989. The first improvement is the inclusion of code to take advantage of IBM's new Vector Facility (VF) on its 3090 machines. The remaining improvements are modifications that will benefit all users as a result of the extended addressing capability provided by the MVS/XA operating system. These improvements include the availability of an in-memory data base that potentially eliminates the need for I/O to the PRIxx disk files. Another improvement is the elimination of multiple load modules that have to be loaded for every link switch within NASTRAN. The last improvement allows NASTRAN to execute above the 16-megabyte line, giving NASTRAN access to 2 gigabytes of memory for open core and the in-memory data base.

  12. COPYCAT; IBM OS system catalog utility routine. [IBM360,370; Assembly language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.

    COPYCAT is an OS utility program designed to produce an efficient system-wide catalog which may reside on many volumes. Substantial improvement in performance may also be obtained on a system with only a single catalog. First, catalog entries from many different catalogs may be redistributed to equalize the load on each catalog. Second, each individual catalog is restructured in a way designed to minimize the I/O time required for searching and updating. Redistribution and restructuring parameters are under user control. Model DSCB's for generation data groups and alias entries are also processed. Catalogs on all direct access devices, including data cells, are supported. Backup copies may also be made. IBM360,370; Assembly language; OS/MVT, OS/MFT, OS/VS1 and OS/VS2 Release 1; A large region size is recommended since COPYCAT will use all of the core available to it for buffers.

  13. State University of New York Institute of Technology (SUNYIT) Summer Scholar Program

    DTIC Science & Technology

    2009-10-01

    Report period: March 2007 – April 2009. Even with access to the Arctic Regional Supercomputer Center (ARSC), evolving a 9/7 wavelet with four multi-resolution levels (MRA 4) involves substantial computation, which was evaluated over the multiple processing elements in the Cell processor. It was tested on Cell processors in a Sony Playstation 3 and on an IBM QS20 blade.

  14. Detailed description of the Mayo/IBM PACS

    NASA Astrophysics Data System (ADS)

    Gehring, Dale G.; Persons, Kenneth R.; Rothman, Melvyn L.; Salutz, James R.; Morin, Richard L.

    1991-07-01

    The Mayo Clinic and IBM/Rochester have jointly developed a picture archiving system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. The system was developed to replace the imaging system's vendor-supplied magnetic tape archiving capability. The system consists of seven MR imagers and nine CT scanners, each interfaced to the PACS via IBM Personal System/2(tm) (PS/2) computers, which act as gateways from the imaging modality to the PACS network. The PAC system operates on the token-ring component of Mayo's city-wide local area network. Also on the PACS network are four optical storage subsystems used for image archival, three optical subsystems used for image retrieval, an IBM Application System/400(tm) (AS/400) computer used for database management and multiple PS/2-based image display systems and their image servers.

  15. TRUMP; transient and steady state temperature distribution. [IBM360,370; CDC7600; FORTRAN IV (95%) and BAL (5%) (IBM); FORTRAN IV (CDC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables--temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state. IBM360,370; CDC7600; FORTRAN IV (95%) and BAL (5%) (IBM); FORTRAN IV (CDC); OS/360 (IBM360), OS/370 (IBM370), SCOPE 2.1.5 (CDC7600); As dimensioned, the program requires 400K bytes of storage on an IBM370 and 145,100 (octal) words on a CDC7600.
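
    As a much-reduced editorial illustration of the class of problem this record describes, the C sketch below advances a one-dimensional transient heat-conduction problem by one explicit time step; TRUMP's actual formulation (general geometries, nonlinear properties, steady-state options) is far more general, and all names and values here are illustrative.

        #include <stddef.h>

        /* One explicit time step of 1-D transient heat conduction,
         *   dT/dt = alpha * d2T/dx2,
         * on a uniform grid with fixed-temperature ends -- a toy analogue
         * of the parabolic problems TRUMP handles in 1, 2, or 3 dimensions. */
        void heat_step(double *t_new, const double *t_old, size_t n,
                       double alpha, double dx, double dt)
        {
            double r = alpha * dt / (dx * dx);   /* must be <= 0.5 for stability */
            t_new[0] = t_old[0];                 /* boundary nodes held fixed    */
            t_new[n - 1] = t_old[n - 1];
            for (size_t i = 1; i + 1 < n; ++i)
                t_new[i] = t_old[i] + r * (t_old[i + 1] - 2.0 * t_old[i] + t_old[i - 1]);
        }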

  16. Hazardous Waste Cleanup: IBM Corporation in Dayton, New Jersey

    EPA Pesticide Factsheets

    The IBM facility is located at 431 Ridge Road on a 66-acre parcel in a mixed residential and industrial section of Dayton, South Brunswick Township, Middlesex County, New Jersey. IBM's manufacturing plant was constructed in 1956 and used until 1985 for

  17. Geospace simulations using modern accelerator processor technology

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.

    2009-12-01

    OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 Gflops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i-based computing cluster, and here we will report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much easier to maintain. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
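
    The multi-level decomposition described above (domain, then 3D columns, then slices per SPE) can be sketched in plain C; the real port uses the Cell SDK for SPE threads, DMA transfers, and SIMD intrinsics, none of which are reproduced here, so the loop below only shows the intended work partitioning with illustrative names and sizes.

        #include <stddef.h>

        #define SLICE 16  /* cells moved into an SPE's local store at a time (illustrative) */

        /* Sketch of the decomposition: the local subdomain (nx*ny*nz cells,
         * z fastest) is split into nx*ny columns; each column is walked
         * slice by slice, mimicking the DMA-limited SPE local store. */
        void process_subdomain(float *field, size_t nx, size_t ny, size_t nz,
                               void (*update)(float *cells, size_t count))
        {
            for (size_t ix = 0; ix < nx; ++ix)
                for (size_t iy = 0; iy < ny; ++iy) {
                    float *column = field + (ix * ny + iy) * nz;   /* one 3-D column */
                    for (size_t iz = 0; iz < nz; iz += SLICE) {
                        size_t count = (nz - iz < SLICE) ? nz - iz : SLICE;
                        update(column + iz, count);  /* real code: DMA in, SIMD update, DMA out */
                    }
                }
        }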

  18. IBM PC/IX operating system evaluation plan

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Granier, Martin; Hall, Philip P.; Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation plan for the IBM PC/IX Operating System designed for IBM PC/XT computers is discussed. The evaluation plan covers the areas of performance measurement and evaluation, software facilities available, man-machine interface considerations, networking, and the suitability of PC/IX as a development environment within the University of Southwestern Louisiana NASA PC Research and Development project. In order to compare and evaluate the PC/IX system, comparisons with other available UNIX-based systems are also included.

  19. Multi-Core Programming Design Patterns: Stream Processing Algorithms for Dynamic Scene Perceptions

    DTIC Science & Technology

    2014-05-01

    The Cell processor, developed by IBM and other companies, incorporates the POWER5 processor as the Power Processor Element (PPE), one of the early general... deliver a power-efficient single-precision peak performance of more than 256 GFlops. Substantially more raw power became available later, when nVIDIA... algorithms, including IBM's Cell/B.E., GPUs from NVidia and AMD, and many-core CPUs from Intel. The vast growth of digital video content has been a...

  20. A digital retina-like low-level vision processor.

    PubMed

    Mertoguno, S; Bourbakis, N G

    2003-01-01

    This correspondence presents the basic design and the simulation of a low level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used on visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k × m hexagonal identical autonomous cells that simultaneously execute certain low level vision tasks. Thus, the hardware design and the simulation at the transistor level of the processing elements (PEs) of the retina-like processor and its simulated functionality with illustrative examples are provided in this paper.
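
    A software analogue of the first two layers described in this record (smoothing, then a simple edge measure) is sketched below on a square grid rather than the hexagonal array of the actual design; the kernels, names, and serial evaluation are illustrative assumptions, not the transistor-level design.

        #include <math.h>
        #include <string.h>

        /* Toy analogue of the first two retina layers: a 3x3 box smoothing
         * pass followed by a gradient-magnitude edge measure, computed
         * serially here instead of by one autonomous PE per pixel. */
        void smooth_and_edges(const float *in, float *smooth, float *edges, long w, long h)
        {
            memcpy(smooth, in, (size_t)(w * h) * sizeof(float)); /* border pixels keep raw values */
            memset(edges, 0, (size_t)(w * h) * sizeof(float));
            for (long y = 1; y < h - 1; ++y)
                for (long x = 1; x < w - 1; ++x) {
                    float s = 0.0f;
                    for (long dy = -1; dy <= 1; ++dy)
                        for (long dx = -1; dx <= 1; ++dx)
                            s += in[(y + dy) * w + (x + dx)];
                    smooth[y * w + x] = s / 9.0f;                /* smoothing layer      */
                }
            for (long y = 1; y < h - 1; ++y)
                for (long x = 1; x < w - 1; ++x) {
                    float gx = smooth[y * w + x + 1] - smooth[y * w + x - 1];
                    float gy = smooth[(y + 1) * w + x] - smooth[(y - 1) * w + x];
                    edges[y * w + x] = sqrtf(gx * gx + gy * gy); /* edge-detection layer */
                }
        }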

  1. LLNL Partners with IBM on Brain-Like Computing Chip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Essen, Brian

    Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips.

  2. LLNL Partners with IBM on Brain-Like Computing Chip

    ScienceCinema

    Van Essen, Brian

    2018-06-25

    Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips.

  3. PS3 CELL Development for Scientific Computation and Research

    NASA Astrophysics Data System (ADS)

    Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.

    2007-12-01

    The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) by using vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This aligns well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing the required run time to one sixth. Further vectorization of the code can allow for 4 simultaneous floating-point operations by using the SIMD (single instruction multiple data) capabilities of the SPU, increasing efficiency 24 times.
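
    The work-splitting idea in this record can be illustrated with a short C analogy: an independent per-cell update is divided statically across six workers, mirroring the six available SPUs. Real PS3 code would use the IBM Cell SDK (libspe2 SPE threads and DMA) rather than OpenMP, and the kernel body here is only a placeholder.

        #include <stddef.h>

        /* Analogy only: divide an independent per-cell update across six
         * workers, as the abstract describes doing across the PS3's six
         * available SPUs. */
        void update_cells(float *cell, size_t n)
        {
        #pragma omp parallel for num_threads(6) schedule(static)
            for (long i = 0; i < (long)n; ++i)
                cell[i] = 0.5f * cell[i];  /* placeholder for the ray-casting or heat-equation kernel */
        }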

  4. Implementation of NASTRAN on the IBM/370 CMS operating system

    NASA Technical Reports Server (NTRS)

    Britten, S. S.; Schumacker, B.

    1980-01-01

    The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.

  5. Digital Parallel Processor Array for Optimum Path Planning

    NASA Technical Reports Server (NTRS)

    Kremeny, Sabrina E. (Inventor); Fossum, Eric R. (Inventor); Nixon, Robert H. (Inventor)

    1996-01-01

    The invention computes the optimum path across a terrain or topology represented by an array of parallel processor cells interconnected between neighboring cells by links extending along different directions to the neighboring cells. Such an array is preferably implemented as a high-speed integrated circuit. The computation of the optimum path is accomplished by, in each cell, receiving stimulus signals from neighboring cells along corresponding directions, determining and storing the identity of a direction along which the first stimulus signal is received, broadcasting a subsequent stimulus signal to the neighboring cells after a predetermined delay time, whereby stimulus signals propagate throughout the array from a starting one of the cells. After propagation of the stimulus signal throughout the array, a master processor traces back from a selected destination cell to the starting cell along an optimum path of the cells in accordance with the identity of the directions stored in each of the cells.
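
    In software terms the propagate-and-trace-back scheme described in this record is a breadth-first wavefront: each cell records the direction from which the first stimulus arrived, and the path is recovered by following those directions back from the destination. A compact grid-based sketch (4-connected, serial, with illustrative names and sizes) follows.

        #include <stdbool.h>

        #define W 8
        #define H 8

        /* from[] records the direction of the first stimulus to reach each
         * cell; trace_back() then follows those directions from the
         * destination to the start.  Serial software analogue only. */
        static const int dx[4] = { 1, -1, 0, 0 };
        static const int dy[4] = { 0, 0, 1, -1 };

        void propagate(const bool blocked[H][W], int sx, int sy, int from[H][W])
        {
            int qx[W * H], qy[W * H], head = 0, tail = 0;
            for (int y = 0; y < H; ++y)
                for (int x = 0; x < W; ++x)
                    from[y][x] = -1;                  /* -1: stimulus not yet received */
            from[sy][sx] = 4;                         /* 4: marks the starting cell    */
            qx[tail] = sx; qy[tail] = sy; ++tail;
            while (head < tail) {
                int x = qx[head], y = qy[head]; ++head;
                for (int d = 0; d < 4; ++d) {         /* broadcast to the 4 neighbours */
                    int nx = x + dx[d], ny = y + dy[d];
                    if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
                    if (blocked[ny][nx] || from[ny][nx] != -1) continue;
                    from[ny][nx] = d;                 /* remember arrival direction    */
                    qx[tail] = nx; qy[tail] = ny; ++tail;
                }
            }
        }

        /* Walk back from (gx,gy) to the start, writing the path cells into
         * px[]/py[]; returns the path length, or 0 if the goal was unreachable. */
        int trace_back(const int from[H][W], int gx, int gy, int px[W * H], int py[W * H])
        {
            if (from[gy][gx] == -1) return 0;
            int n = 0;
            while (from[gy][gx] != 4) {
                px[n] = gx; py[n] = gy; ++n;
                int d = from[gy][gx];
                gx -= dx[d]; gy -= dy[d];             /* step against arrival direction */
            }
            px[n] = gx; py[n] = gy;
            return n + 1;
        }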

  6. IBM NJE protocol emulator for VAX/VMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.

    1981-01-01

    Communications software has been written at Argonne National Laboratory to enable a VAX/VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE is actually a collection of programs that support job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any node in the network for printing, punching, or job submission,more » as well as to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously to allow users to perform other work while files are awaiting transmission. No changes are required to the IBM software.« less

  7. Geometrical model for DBMS: an experimental DBMS using IBM solid modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, D.E.D.L.

    1985-01-01

    This research presents a new model for data base management systems (DBMS). The new model, Geometrical DBMS, is based on using solid modelling technology in designing and implementing DBMS. The Geometrical DBMS is implemented using the IBM solid modelling Geometric Design Processor (GDP). Built basically on computer-graphics concepts, Geometrical DBMS is indeed a unique model. Traditionally, researchers start with one of the existent DBMS models and then put a graphical front end on it. In Geometrical DBMS, the graphical aspect of the model is not an alien concept tailored to the model but is, as a matter of fact, the atom around which the model is designed. The main idea in Geometrical DBMS is to allow the user and the system to refer to and manipulate data items as solid objects in 3D space, and to represent a record as a group of logically related solid objects. In Geometrical DBMS, hierarchical structure is used to present the data relations and the user sees the data as a group of arrays; yet, for the user and the system together, the data structure is a multidimensional tree.

  8. Quality of red blood cells washed using a second wash sequence on an automated cell processor.

    PubMed

    Hansen, Adele L; Turner, Tracey R; Kurach, Jayme D R; Acker, Jason P

    2015-10-01

    Washed red blood cells (RBCs) are indicated for immunoglobulin (Ig)A-deficient recipients when RBCs from IgA-deficient donors are not available. Canadian Blood Services recently began using the automated ACP 215 cell processor (Haemonetics Corporation) for RBC washing, and its suitability to produce IgA-deficient RBCs was investigated. RBCs produced from whole blood donations by the buffy coat (BC) and whole blood filtration (WBF) methods were washed using the ACP 215 or the COBE 2991 cell processors and IgA and total protein levels were assessed. A double-wash procedure using the ACP 215 was developed, tested, and validated by assessing hemolysis, hematocrit, recovery, and other in vitro quality variables in RBCs stored after washing, with and without irradiation. A single wash using the ACP 215 did not meet Canadian Standards Association recommendations for washing with more than 2 L of solution and could not consistently reduce IgA to levels suitable for IgA-deficient recipients (24/26 BC RBCs and 0/9 WBF RBCs had IgA levels < 0.05 mg/dL). Using a second wash sequence, all BC and WBF units were washed with more than 2 L and had levels of IgA of less than 0.05 mg/dL. During 7 days' postwash storage, with and without irradiation, double-washed RBCs met quality control criteria, except for the failure of one RBC unit for inadequate (69%) postwash recovery. Using the ACP 215, a double-wash procedure for the production of components for IgA-deficient recipients from either BC or WBF RBCs was developed and validated. © 2015 AABB.

  9. Strong scaling and speedup to 16,384 processors in cardiac electro-mechanical simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J

    2009-01-01

    High performance computing is required to make feasible simulations of whole organ models of the heart with biophysically detailed cellular models in a clinical setting. Increasing model detail by simulating electrophysiology and mechanical models increases computation demands. We present scaling results of an electro-mechanical cardiac model of two ventricles and compare them to our previously published results using an electrophysiological model only. The anatomical data-set was given by both ventricles of the Visible Female data-set at a 0.2 mm resolution. Fiber orientation was included. Data decomposition for the distribution onto the distributed memory system was carried out by orthogonal recursive bisection. Load weight ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100. The ten Tusscher et al. (2004) electrophysiological cell model was used and the Rice et al. (1999) model for the computation of the calcium-transient-dependent force. Scaling results for 512, 1024, 2048, 4096, 8192 and 16,384 processors were obtained for 1 ms simulation time. The simulations were carried out on an IBM Blue Gene/L supercomputer. The results show linear scaling from 512 to 16,384 processors with speedup factors between 1.82 and 2.14 between partitions. The optimal load ratio was 1:25 on all partitions. However, a shift towards load ratios with higher weight for the tissue elements can be recognized, as can be expected when adding computational complexity to the model while keeping the same communication setup. This work demonstrates that it is potentially possible to run simulations of 0.5 s using the presented electro-mechanical cardiac model within 1.5 hours.

  10. IBM, Elsevier Science, and academic freedom.

    PubMed

    Bailar, John C; Cicolella, Andre; Harrison, Robert; LaDou, Joseph; Levy, Barry S; Rohm, Timothy; Teitelbaum, Daniel T; Wang, Yung-Der; Watterson, Andrew; Yoshida, Fumikazu

    2007-01-01

    Elsevier Science refused to publish a study of IBM workers that IBM sought to keep from public view. Occupational and environmental health (OEH) suffers from the absence of a level playing field on which science can thrive. Industry pays for a substantial portion of OEH research. Studies done by private consulting firms or academic institutions may be published if the results suit the sponsoring companies, or they may be censored. OEH journals often reflect the dominance of industry influence on research in the papers they publish, sometimes withdrawing or modifying papers in line with industry and advertising agendas. Although such practices are widely recognized, no fundamental change is supported by government and industry or by professional organizations.

  11. Hazardous Waste Cleanup: IBM Corporation, Former in Owego, New York

    EPA Pesticide Factsheets

    The corrective action activities at the facility are conducted by IBM Corporation, therefore IBM is listed as the operator of the Part 373 Hazardous Waste Management (HWM) Permit for corrective action. Lockheed Martin Corporation owns the facility and is l

  12. Compact propane fuel processor for auxiliary power unit application

    NASA Astrophysics Data System (ADS)

    Dokupil, M.; Spitta, C.; Mathiak, J.; Beckhaus, P.; Heinzel, A.

    With a focus on mobile applications, a fuel cell auxiliary power unit (APU) using liquefied petroleum gas (LPG) is currently being developed at the Centre for Fuel Cell Technology (Zentrum für BrennstoffzellenTechnik, ZBT gGmbH). The system consists of an integrated compact and lightweight fuel processor and a low-temperature PEM fuel cell for an electric power output of 300 W. This article presents the current status of development of the fuel processor, which is designed for a nominal hydrogen output of 1 kWth,H2 within a load range from 50 to 120%. A modular setup was chosen, defining a reformer/burner module and a CO-purification module. Based on the performance specifications, thermodynamic simulations, benchmarking and selection of catalysts, the modules have been developed and characterised simultaneously and then assembled into the complete fuel processor. Automated operation results in a cold startup time of about 25 min for nominal load and carbon monoxide output concentrations below 50 ppm for steady state and dynamic operation. A fast transient response of the fuel processor to load changes, with low fluctuations of the reformate gas composition, has also been achieved. Besides the development of the main reactors, the transfer of the fuel processor to an autonomous system is of major concern. Hence, concepts for packaging have been developed, resulting in a volume of 7 l and a weight of 3 kg. Further, a selection of peripheral components has been tested and evaluated with regard to substituting the laboratory equipment.

  13. Data Discovery with IBM Watson

    NASA Astrophysics Data System (ADS)

    Fessler, J.

    2016-12-01

    IBM Watson is a cognitive computing system that uses machine learning, statistical analysis, and natural language processing to find and understand the clues in questions posed to it. Watson was made famous when it bested two champions on TV's Jeopardy! show. Since then, Watson has evolved into a platform of cognitive services that can be trained on very granular fields of study. Watson is being used to support a number of subject domains, such as cancer research, public safety, engineering, and the intelligence community. IBM will be providing a presentation and demonstration on the Watson technology and will discuss its capabilities, including Natural Language Processing, text analytics and enterprise search, as well as cognitive computing with deep Q&A. The team will also be giving examples of how IBM Watson technology is being used to support real-world problems across a number of public sector agencies.

  14. New Generation General Purpose Computer (GPC) compact IBM unit

    NASA Technical Reports Server (NTRS)

    1991-01-01

    New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).

  15. Stream Processors

    NASA Astrophysics Data System (ADS)

    Erez, Mattan; Dally, William J.

    Stream processors, like other multi-core architectures, partition their functional units and storage into multiple processing elements. In contrast to typical architectures, which contain symmetric general-purpose cores and a cache hierarchy, stream processors have a significantly leaner design. Stream processors are specifically designed for the stream execution model, in which applications have large amounts of explicit parallel computation, structured and predictable control, and memory accesses that can be performed at a coarse granularity. Applications in the streaming model are expressed in a gather-compute-scatter form, yielding programs with explicit control over transferring data to and from on-chip memory. Relying on these characteristics, which are common to many media processing and scientific computing applications, stream architectures redefine the boundary between software and hardware responsibilities, with software bearing much of the complexity required to manage concurrency, locality, and latency tolerance. Thus, stream processors have minimal control, consisting of fetching medium- and coarse-grained instructions and executing them directly on the many ALUs. Moreover, the on-chip storage hierarchy of stream processors is under explicit software control, as is all communication, eliminating the need for complex reactive hardware mechanisms.
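
    The gather-compute-scatter form mentioned above can be illustrated with a tiny C kernel: indices select the working set (gather), a purely local computation runs over it, and results are written back through another index set (scatter). The buffer size, index arrays, and kernel body are illustrative assumptions, not part of the chapter.

        #include <stddef.h>

        /* Minimal gather-compute-scatter kernel in the style of the stream
         * execution model. */
        void gather_compute_scatter(const float *src, float *dst,
                                    const size_t *gather_idx, const size_t *scatter_idx,
                                    size_t n)
        {
            float buf[256];                      /* stands in for on-chip stream storage */
            for (size_t base = 0; base < n; base += 256) {
                size_t len = (n - base < 256) ? n - base : 256;
                for (size_t i = 0; i < len; ++i)              /* gather  */
                    buf[i] = src[gather_idx[base + i]];
                for (size_t i = 0; i < len; ++i)              /* compute */
                    buf[i] = buf[i] * buf[i] + 1.0f;
                for (size_t i = 0; i < len; ++i)              /* scatter */
                    dst[scatter_idx[base + i]] = buf[i];
            }
        }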

  16. Array processor architecture

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not lock-stepped but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.

  17. Green Secure Processors: Towards Power-Efficient Secure Processor Design

    NASA Astrophysics Data System (ADS)

    Chhabra, Siddhartha; Solihin, Yan

    With the increasing wealth of digital information stored on computer systems today, security issues have become increasingly important. In addition to attacks targeting the software stack of a system, hardware attacks have become equally likely. Researchers have proposed Secure Processor Architectures which utilize hardware mechanisms for memory encryption and integrity verification to protect the confidentiality and integrity of data and computation, even from sophisticated hardware attacks. While there have been many works addressing performance and other system level issues in secure processor design, power issues have largely been ignored. In this paper, we first analyze the sources of power (energy) increase in different secure processor architectures. We then present a power analysis of various secure processor architectures in terms of their increase in power consumption over a base system with no protection and then provide recommendations for designs that offer the best balance between performance and power without compromising security. We extend our study to the embedded domain as well. We also outline the design of a novel hybrid cryptographic engine that can be used to minimize the power consumption for a secure processor. We believe that if secure processors are to be adopted in future systems (general purpose or embedded), it is critically important that power issues are considered in addition to performance and other system level issues. To the best of our knowledge, this is the first work to examine the power implications of providing hardware mechanisms for security.

  18. Microscopic derivation of IBM and structural evolution in nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, Kosuke

    A Hamiltonian of the interacting boson model (IBM) is derived based on the mean-field calculations with nuclear energy density functionals (EDFs). The multi-nucleon dynamics of the surface deformation is simulated in terms of the boson degrees of freedom. The interaction strengths of the IBM Hamiltonian are determined by mapping the potential energy surfaces (PESs) of a given EDF with quadrupole degrees of freedom onto the corresponding PES of IBM. A fermion-to-boson mapping for a rotational nucleus is discussed in terms of the rotational response, which reflects a specific time-dependent feature. Ground-state correlation energy is evaluated as a signature of structural evolution. Some examples resulting from the present spectroscopic calculations are shown for neutron-rich Pt, Os and W isotopes including exotic ones.
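
    For orientation only, a frequently used schematic form of an IBM-1 Hamiltonian of the kind referred to in this record is shown below; the specific terms and parameters retained in the cited work may differ.

        \hat{H} = \epsilon\,\hat{n}_d + \kappa\,\hat{Q}\cdot\hat{Q}, \qquad
        \hat{Q} = d^{\dagger}s + s^{\dagger}\tilde{d} + \chi\,(d^{\dagger}\tilde{d})^{(2)}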

  19. Applications Performance Under MPL and MPI on NAS IBM SP2

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    On July 5, 1994, an IBM Scalable POWER parallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of a RISC 6000/590 workstation module with a clock of 66.5 MHz that can perform four floating-point operations per clock, with a peak performance of 266 Mflop/s. By the end of 1994, the 64 nodes of the IBM SP2 will be upgraded to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS 6000/590 will help application scientists in porting, optimizing, and tuning codes from other machines such as the CRAY C90 and the Paragon to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating-point units, and data cache optimization of the RS 6000/590 are illustrated, with examples giving performance gains at each optimization step. The conversion of codes using Intel's message passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we will present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS 2 and BLAS 3 routines, e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM runs at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and the CRAY T3D.
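
    The peak figures quoted above follow directly from the clock rate, the issue width, and the node count:

        66.5\ \mathrm{MHz} \times 4\ \mathrm{flops/cycle} = 266\ \mathrm{Mflop/s\ per\ node}, \qquad
        160 \times 266\ \mathrm{Mflop/s} \approx 42.5\ \mathrm{Gflop/s}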

  20. Training in the Workplace: An IBM Case Study. Contractor Report.

    ERIC Educational Resources Information Center

    Grubb, Ralph E.

    International Business Machines Corporation's (IBM) efforts to develop a corporate culture are associated with its founder, Thomas J. Watson, Sr. From the start of his association with the company in 1914, the importance of education was stressed. The expansion of the education and training organization paralleled IBM's 75-year growth. In January…

  1. A diesel fuel processor for fuel-cell-based auxiliary power unit applications

    NASA Astrophysics Data System (ADS)

    Samsun, Remzi Can; Krekel, Daniel; Pasel, Joachim; Prawitz, Matthias; Peters, Ralf; Stolten, Detlef

    2017-07-01

    Producing a hydrogen-rich gas from diesel fuel enables the efficient generation of electricity in a fuel-cell-based auxiliary power unit. In recent years, significant progress has been achieved in diesel reforming. One issue encountered is the stable operation of water-gas shift reactors with real reformates. A new fuel processor is developed using a commercial shift catalyst. The system is operated using optimized start-up and shut-down strategies. Experiments with diesel and kerosene fuels show slight performance drops in the shift reactor during continuous operation for 100 h. CO concentrations much lower than the target value are achieved during system operation in auxiliary power unit mode at partial loads of up to 60%. The regeneration leads to full recovery of the shift activity. Finally, a new operation strategy is developed whereby the gas hourly space velocity of the shift stages is re-designed. This strategy is validated using different diesel and kerosene fuels, showing a maximum CO concentration of 1.5% at the fuel processor outlet under extreme conditions, which can be tolerated by a high-temperature PEFC. The proposed operation strategy solves the issue of strong performance drop in the shift reactor and makes this technology available for reducing emissions in the transportation sector.

  2. Simulating Synchronous Processors

    DTIC Science & Technology

    1988-06-01

    MIT/LCS/TM-359, Simulating Synchronous Processors, Jennifer Lundelius Welch, Laboratory for Computer Science, Massachusetts Institute of Technology. In this paper we show how a distributed system with synchronous processors and asynchronous message delays can

  3. Analysis of the energy efficiency of an integrated ethanol processor for PEM fuel cell systems

    NASA Astrophysics Data System (ADS)

    Francesconi, Javier A.; Mussati, Miguel C.; Mato, Roberto O.; Aguirre, Pio A.

    The aim of this work is to investigate the energy integration and to determine the maximum efficiency of an ethanol processor for hydrogen production and fuel cell operation. Ethanol, which can be produced from renewable feedstocks or agricultural residues, is an attractive option as feed to a fuel processor. The fuel processor investigated is based on steam reforming, followed by high- and low-temperature shift reactors and preferential oxidation, which are coupled to a polymeric fuel cell. Applying simulation techniques and using thermodynamic models, the performance of the complete system has been evaluated for a variety of operating conditions and possible reforming reaction pathways. These models involve mass and energy balances, chemical equilibrium and feasible heat transfer conditions (ΔTmin). The main operating variables were determined for those conditions. The endothermic nature of the reformer has a significant effect on the overall system efficiency. The highest energy consumption is demanded by the reforming reactor, the evaporator and the re-heater operations. To obtain an efficient integration, the heat exchanged between the reformer outgoing streams of higher thermal level (reforming and combustion gases) and the feed stream should be maximized. Another process variable that affects the process efficiency is the water-to-fuel ratio fed to the reformer. Large amounts of water involve large heat exchangers and the associated heat losses. A net electric efficiency around 35% was calculated based on the ethanol HHV. The remaining 65% is accounted for by dissipation as heat in the PEMFC cooling system (38%), energy in the flue gases (10%) and irreversibilities in the compression and expansion of gases. In addition, it has been possible to determine the self-sufficient limit conditions, and to analyze the effect on the net efficiency of the input temperatures of the clean-up system reactors, combustion preheating, expander unit and crude ethanol as
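
    The net electric efficiency quoted above is simply the net electrical output divided by the chemical energy fed as ethanol on an HHV basis. A small sketch with illustrative numbers only (the power and flow rate below are not values from the study):

        # Net electric efficiency of a fuel processor + fuel cell system on an HHV basis.
        HHV_ETHANOL = 29.7e6          # J/kg, approximate higher heating value of ethanol

        def net_electric_efficiency(p_net_w, ethanol_kg_per_s):
            """eta = net electric power / (fuel mass flow * HHV)."""
            return p_net_w / (ethanol_kg_per_s * HHV_ETHANOL)

        # Illustrative: 1 kW net output from ~0.096 g/s of ethanol gives roughly 35%.
        print(round(net_electric_efficiency(1000.0, 9.6e-5), 3))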

  4. Integration, Development and Performance of the 500 TFLOPS Heterogeneous Cluster (Condor)

    DTIC Science & Technology

    2012-08-01

    Reference fragments: "PlayStation 3 for High Performance Cluster Computing," LAPACK Working Note 185, 2007; Feng, W., X. Feng, and R. Ge, "Green Supercomputing Comes of..." Conference paper (post print), dates covered June 2010 - May 2013. ...and streaming processing; the PlayStation 3 uses the IBM Cell BE processor, which adopts the multi-processor, single-instruction-multiple-data (SIMD

  5. Ada Compiler Validation Summary Report: Certificate Number: 880318W1. 09041, International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0, IBM 4381 under VM/HPO, Host and Target

    DTIC Science & Technology

    1988-03-28

    International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0, IBM 4381 under VM/HPO, host and target. ...in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record of the object code of the

  6. Hybrid Electro-Optic Processor

    DTIC Science & Technology

    1991-07-01

    This report describes the design of a hybrid electro-optic processor to perform adaptive interference cancellation in radar systems. The processor is...modulator is reported. Included in this report is a discussion of the design, partial fabrication in the laboratory, and partial testing of the hybrid electro-optic processor. A follow-on effort is planned to complete the construction and testing of the processor. The work described in this report is the

  7. Design and development of an IBM/VM menu system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cazzola, D.J.

    1992-10-01

    This report describes a full screen menu system developed using IBM's Interactive System Productivity Facility (ISPF) and the REXX programming language. The software was developed for the 2800 IBM/VM Electrical Computer Aided Design (ECAD) system. The system was developed to deliver electronic drawing definitions to a corporate drawing release system. Although this report documents the status of the menu system when it was retired, the methodologies used and the requirements defined are very applicable to replacement systems.

  8. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  9. External audio for IBM-compatible computers

    NASA Technical Reports Server (NTRS)

    Washburn, David A.

    1992-01-01

    Numerous applications benefit from the presentation of computer-generated auditory stimuli at points discontiguous with the computer itself. Modification of an IBM-compatible computer for use of an external speaker is relatively easy but not intuitive. This modification is briefly described.

  10. Understanding checkpointing overheads on massive-scale systems : analysis of the IBM Blue Gene/P system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, R.; Naik, H.; Beckman, P.

    Providing fault tolerance in high-end petascale systems, consisting of millions of hardware components and complex software stacks, is becoming an increasingly challenging task. Checkpointing continues to be the most prevalent technique for providing fault tolerance in such high-end systems. Considerable research has focussed on optimizing checkpointing; however, in practice, checkpointing still involves a high-cost overhead for users. In this paper, we study the checkpointing overhead seen by various applications running on leadership-class machines like the IBM Blue Gene/P at Argonne National Laboratory. In addition to studying popular applications, we design a methodology to help users understand and intelligently choose an optimal checkpointing frequency to reduce the overall checkpointing overhead incurred. In particular, we study the Grid-Based Projector-Augmented Wave application, the Carr-Parrinello Molecular Dynamics application, the Nek5000 computational fluid dynamics application and the Parallel Ocean Program application, and analyze their memory usage and possible checkpointing trends on 65,536 processors of the Blue Gene/P system.
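
    One widely used rule of thumb for the trade-off studied here is Young's first-order approximation, which picks the checkpoint interval from the checkpoint cost and the system's mean time between failures. The sketch below uses that textbook formula, not the methodology developed in the paper, and the numbers are illustrative:

        # Young's first-order approximation of the optimal checkpoint interval.
        import math

        def optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s):
            """tau ~= sqrt(2 * C * MTBF), valid when C << MTBF."""
            return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

        # Illustrative: a 10-minute checkpoint on a machine with a 24-hour MTBF.
        tau = optimal_checkpoint_interval(600.0, 24 * 3600.0)
        print(f"checkpoint roughly every {tau / 3600.0:.1f} hours")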

  11. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    PubMed

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
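
    The load-balancing idea can be pictured with a toy greedy assignment: split the largest cells into two pieces and place the pieces on the least-loaded rank. The sketch below is a generic longest-processing-time heuristic, not the algorithm used in the cited work, and the cell "costs" are arbitrary:

        # Toy load balancing: optionally split big cells, then greedily assign to ranks.
        import heapq

        def worst_rank_load(costs, n_ranks, split_threshold=None):
            if split_threshold is not None:          # model whole-cell splitting
                costs = [c for cost in costs
                         for c in ([cost / 2.0, cost / 2.0] if cost > split_threshold else [cost])]
            ranks = [(0.0, r) for r in range(n_ranks)]   # (load, rank id) min-heap
            heapq.heapify(ranks)
            for cost in sorted(costs, reverse=True):     # largest pieces first (LPT)
                load, r = heapq.heappop(ranks)
                heapq.heappush(ranks, (load + cost, r))
            return max(load for load, _ in ranks)

        cells = [8.0, 5.0, 4.0, 1.0, 1.0, 1.0]
        print(worst_rank_load(cells, 4))                       # whole cells: worst load 8.0
        print(worst_rank_load(cells, 4, split_threshold=4.5))  # splitting drops it to 5.0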

  12. Hybrid Optical Processor

    DTIC Science & Technology

    1990-08-01

    RADC-TR-90-256, Final Technical Report, August 1990 (AD-A227 163): Hybrid Optical Processor, Dove Electronics, Inc.; J.F. Dove, F.T.S. Yu, C. Eldering; contract F19628-87-C-0086. Table-of-contents fragments: liquid crystal televisions (LCTVs); joint Fourier transform processor; holographic associative memory using a micro...

  13. Development of compact fuel processor for 2 kW class residential PEMFCs

    NASA Astrophysics Data System (ADS)

    Seo, Yu Taek; Seo, Dong Joo; Jeong, Jin Hyeok; Yoon, Wang Lai

    Korea Institute of Energy Research (KIER) has been developing a novel fuel processing system to provide hydrogen rich gas to residential polymer electrolyte membrane fuel cells (PEMFCs) cogeneration system. For the effective design of a compact hydrogen production system, the unit processes of steam reforming, high and low temperature water gas shift, steam generator and internal heat exchangers are thermally and physically integrated into a packaged hardware system. Several prototypes are under development and the prototype I fuel processor showed thermal efficiency of 73% as a HHV basis with methane conversion of 81%. Recently tested prototype II has been shown the improved performance of thermal efficiency of 76% with methane conversion of 83%. In both prototypes, two-stage PrOx reactors reduce CO concentration less than 10 ppm, which is the prerequisite CO limit condition of product gas for the PEMFCs stack. After confirming the initial performance of prototype I fuel processor, it is coupled with PEMFC single cell to test the durability and demonstrated that the fuel processor is operated for 3 days successfully without any failure of fuel cell voltage. Prototype II fuel processor also showed stable performance during the durability test.

  14. Sequence information signal processor

    DOEpatents

    Peterson, John C.; Chow, Edward T.; Waterman, Michael S.; Hunkapillar, Timothy J.

    1999-01-01

    An electronic circuit is used to compare two sequences, such as genetic sequences, to determine which alignment of the sequences produces the greatest similarity. The circuit includes a linear array of series-connected processors, each of which stores a single element from one of the sequences and compares that element with each successive element in the other sequence. For each comparison, the processor generates a scoring parameter that indicates which segment ending at those two elements produces the greatest degree of similarity between the sequences. The processor uses the scoring parameter to generate a similar scoring parameter for a comparison between the stored element and the next successive element from the other sequence. The processor also delivers the scoring parameter to the next processor in the array for use in generating a similar scoring parameter for another pair of elements. The electronic circuit determines which processor and alignment of the sequences produce the scoring parameter with the highest value.
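
    Functionally, each processor in the array carries one element of the first sequence and updates a running best-segment score as elements of the second sequence stream past, which is the systolic form of a local-alignment recurrence. A plain software rendering of that recurrence (Smith-Waterman-style scoring with arbitrary match/mismatch/gap weights; an analogue, not the patented circuit) is sketched below:

        # Local alignment scoring (Smith-Waterman recurrence) in ordinary software form.
        def best_local_score(a, b, match=2, mismatch=-1, gap=-2):
            prev = [0] * (len(b) + 1)
            best = 0
            for x in a:                       # one row per element held by a "processor"
                curr = [0]
                for j, y in enumerate(b, 1):  # elements of the other sequence stream past
                    score = max(0,
                                prev[j - 1] + (match if x == y else mismatch),
                                prev[j] + gap,      # gap in b
                                curr[j - 1] + gap)  # gap in a
                    curr.append(score)
                    best = max(best, score)
                prev = curr
            return best

        print(best_local_score("GATTACA", "GCATGCA"))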

  15. Issues Identified During September 2016 IBM OpenMP 4.5 Hackathon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, David F.

    In September 2016, IBM hosted an OpenMP 4.5 Hackathon at the TJ Watson Research Center. Teams from LLNL, ORNL, SNL, LANL, and LBNL attended the event. As with the 2015 hackathon, IBM produced an extremely useful and successful event with unmatched support from the compiler team, applications staff, and facilities. Approximately 24 IBM staff supported the 4-day hackathon and spent significant time 4-6 weeks beforehand preparing the environment and becoming familiar with the apps. This hackathon was also the first event to feature the LLVM and XL C/C++ and Fortran compilers. This report records many of the issues encountered by the LLNL teams during the hackathon.

  16. Ada Compiler Validation Summary Report: Certificate Number 89020W1. 10073: International Business Machines Corporation, IBM Development System for the Ada Language, VM/CMS Ada Compiler, Version 2.1.1, IBM 3083 (Host and Target)

    DTIC Science & Technology

    1989-04-20

    Certificate Number 890420W1.10073: International Business Machines Corporation, IBM Development System for the Ada Language, VM/CMS Ada Compiler, Version 2.1.1, IBM 3083, Wright-Patterson AFB. ...International Business Machines Corporation and reviewed by the validation team. The compiler was tested using all default option settings except for the

  17. Multithreading in vector processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evangelinos, Constantinos; Kim, Changhoan; Nair, Ravi

    In one embodiment, a system includes a processor having a vector processing mode and a multithreading mode. The processor is configured to operate on one thread per cycle in the multithreading mode. The processor includes a program counter register having a plurality of program counters, and the program counter register is vectorized. Each program counter in the program counter register represents a distinct corresponding thread of a plurality of threads. The processor is configured to execute the plurality of threads by activating the plurality of program counters in a round robin cycle.
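
    A vectorized program-counter register cycling through threads in round robin can be modeled very simply; the sketch below is a behavioral toy (one instruction issued per cycle, unit-length instructions), not the patented hardware:

        # Behavioral model of a round-robin multithreaded fetch stage.
        class RoundRobinCore:
            def __init__(self, entry_points):
                self.pcs = list(entry_points)   # one program counter per thread (vectorized)
                self.next_thread = 0

            def step(self):
                """Issue from one thread this cycle, then advance its PC and rotate."""
                t = self.next_thread
                pc = self.pcs[t]
                self.pcs[t] = pc + 1                        # pretend every instruction is one word
                self.next_thread = (t + 1) % len(self.pcs)  # round-robin to the next thread
                return t, pc

        core = RoundRobinCore([0x100, 0x200, 0x300])
        for _ in range(6):
            print("thread %d fetches PC 0x%x" % core.step())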

  18. Making IBM's Computer, Watson, Human

    ERIC Educational Resources Information Center

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, "Jeopardy," to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered…

  19. Performance Support Case Studies from IBM.

    ERIC Educational Resources Information Center

    Duke-Moran, Celia; Swope, Ginger; Morariu, Janis; deKam, Peter

    1999-01-01

    Presents two case studies that show how IBM addressed performance support solutions and electronic learning. The first developed a performance support and expert coaching solution; the second applied performance support to reducing implementation time and total cost of ownership of enterprise resource planning systems. (Author/LRW)

  20. Hardware multiplier processor

    DOEpatents

    Pierce, Paul E.

    1986-01-01

    A hardware processor is disclosed which in the described embodiment is a memory mapped multiplier processor that can operate in parallel with a 16 bit microcomputer. The multiplier processor decodes the address bus to receive specific instructions so that in one access it can write and automatically perform single or double precision multiplication involving a number written to it with or without addition or subtraction with a previously stored number. It can also, on a single read command automatically round and scale a previously stored number. The multiplier processor includes two concatenated 16 bit multiplier registers, two 16 bit concatenated 16 bit multipliers, and four 16 bit product registers connected to an internal 16 bit data bus. A high level address decoder determines when the multiplier processor is being addressed and first and second low level address decoders generate control signals. In addition, certain low order address lines are used to carry uncoded control signals. First and second control circuits coupled to the decoders generate further control signals and generate a plurality of clocking pulse trains in response to the decoded and address control signals.
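
    The memory-mapped control idea, in which the address used for an access selects the arithmetic operation performed on the data, can be mimicked with a small address-decoder model. The addresses and operation map below are invented for illustration and do not correspond to the patented device:

        # Toy model of a memory-mapped multiplier: the write address selects the operation.
        class MemoryMappedMultiplier:
            BASE = 0x8000                      # invented base address for the device

            def __init__(self):
                self.operand = 0               # previously stored number
                self.product = 0               # product register

            def write(self, address, value):
                op = address - self.BASE       # low-order address bits act as the opcode
                if op == 0x0:
                    self.operand = value                    # store an operand
                elif op == 0x2:
                    self.product = value * self.operand     # multiply
                elif op == 0x4:
                    self.product += value * self.operand    # multiply and add
                else:
                    raise ValueError("unmapped address")

            def read(self, address):
                return self.product            # a real device would also round/scale here

        mmm = MemoryMappedMultiplier()
        mmm.write(0x8000, 7)       # store 7
        mmm.write(0x8002, 6)       # 6 * 7 -> 42
        mmm.write(0x8004, 2)       # + 2 * 7 -> 56
        print(mmm.read(0x8000))    # 56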

  1. Hardware multiplier processor

    DOEpatents

    Pierce, P.E.

    A hardware processor is disclosed which in the described embodiment is a memory mapped multiplier processor that can operate in parallel with a 16 bit microcomputer. The multiplier processor decodes the address bus to receive specific instructions so that in one access it can write and automatically perform single or double precision multiplication involving a number written to it with or without addition or subtraction with a previously stored number. It can also, on a single read command automatically round and scale a previously stored number. The multiplier processor includes two concatenated 16 bit multiplier registers, two 16 bit concatenated 16 bit multipliers, and four 16 bit product registers connected to an internal 16 bit data bus. A high level address decoder determines when the multiplier processor is being addressed and first and second low level address decoders generate control signals. In addition, certain low order address lines are used to carry uncoded control signals. First and second control circuits coupled to the decoders generate further control signals and generate a plurality of clocking pulse trains in response to the decoded and address control signals.

  2. The IBM PC at NASA Ames

    NASA Technical Reports Server (NTRS)

    Peredo, James P.

    1988-01-01

    Like many large companies, Ames relies heavily on its computing power to get work done, and, finding the IBM PC a reliable tool, it uses the PC for many of the same types of functions as other companies. Presentation and clarification needs demand much of graphics packages; programming and text editing needs call for simpler, more powerful packages; and the monumental amounts of data that Ames's scientists and users must keep demand database packages that are both large and easy to use. Access to the Micom Switching Network combines the power of the IBM PC with the capabilities of other computers and mainframes and allows users to communicate electronically. These four primary capabilities of the PC are vital to the needs of NASA's users and help to sustain and support the vast amounts of work done by NASA employees.

  3. 35-We polymer electrolyte membrane fuel cell system for notebook computer using a compact fuel processor

    NASA Astrophysics Data System (ADS)

    Son, In-Hyuk; Shin, Woo-Cheol; Lee, Yong-Kul; Lee, Sung-Chul; Ahn, Jin-Gu; Han, Sang-Il; Kweon, Ho-Jin; Kim, Ju-Yong; Kim, Moon-Chan; Park, Jun-Yong

    A polymer electrolyte membrane fuel cell (PEMFC) system is developed to power a notebook computer. The system consists of a compact methanol-reforming system with a CO preferential oxidation unit, a 16-cell PEMFC stack, and a control unit for the management of the system with a d.c.-d.c. converter. The compact fuel-processor system (260 cm3) generates about 1.2 L min-1 of reformate, which corresponds to 35 We, with a low CO concentration (<30 ppm, typically 0 ppm), and is thus shown to be suitable for targeting notebook computers.

  4. The web server of IBM's Bioinformatics and Pattern Discovery group.

    PubMed

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-07-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  5. The web server of IBM's Bioinformatics and Pattern Discovery group

    PubMed Central

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-01-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/. PMID:12824385

  6. Direct RF A-O Processor Spectrum Analyzer.

    DTIC Science & Technology

    1981-08-01

    The primary objective was to develop and demonstrate a design approach, along with the associated processing technologies, for a wideband acousto-optic Bragg-cell spectrum analyzer. The signal processor used to demonstrate feasibility of the technical approach consisted of two bulk-wave acousto-optic deflectors

  7. Investigation of triaxiality in 122-128Xe isotopes in the framework of sdg-IBM

    NASA Astrophysics Data System (ADS)

    Jafarizadeh, M. A.; Ranjbar, Z.; Fouladi, N.; Ghapanvari, M.

    In this paper, a transitional interacting boson model (IBM) Hamiltonian in both the sd-IBM and sdg-IBM versions, based on the affine SU(1,1) Lie algebra, is employed to describe deviations from the gamma-unstable nature of the Hamiltonian along the chain of Xe isotopes. The sdg-IBM Hamiltonian provides a better interpretation of this deviation, which cannot be explained in the sd-boson models. The nuclei studied have well-known γ bands close to the γ-unstable limit. The energy levels, B(E2) transition rates and signature splitting of the γ-vibrational band are calculated via the affine SU(1,1) Lie algebra. An acceptable degree of agreement was achieved based on this procedure. It is shown that in these isotopes the signature splitting is better reproduced by the inclusion of sdg-IBM. In none of them is any evidence found for a stable, triaxial ground-state shape.

  8. A performance evaluation of the IBM 370/XT personal computer

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation of the IBM 370/XT personal computer is given. This evaluation focuses primarily on the use of the 370/XT for scientific and technical applications and applications development. A measurement of the capabilities of the 370/XT was performed by means of test programs which are presented. Also included is a review of facilities provided by the operating system (VM/PC), along with comments on the IBM 370/XT hardware configuration.

  9. Array processor architecture connection network

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1982-01-01

    A connection network is disclosed for use between a parallel array of processors and a parallel array of memory modules for establishing non-conflicting data communications paths between requested memory modules and requesting processors. The connection network includes a plurality of switching elements interposed between the processor array and the memory modules array in an Omega networking architecture. Each switching element includes a first and a second processor side port, a first and a second memory module side port, and control logic circuitry for providing data connections between the first and second processor ports and the first and second memory module ports. The control logic circuitry includes strobe logic for examining data arriving at the first and the second processor ports to indicate when the data arriving is requesting data from a requesting processor to a requested memory module. Further, connection circuitry is associated with the strobe logic for examining requesting data arriving at the first and the second processor ports for providing a data connection therefrom to the first and the second memory module ports in response thereto when the data connection so provided does not conflict with a pre-established data connection currently in use.
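
    In an Omega network with N ports, the path is fully determined by the destination address: at each of the log2 N stages, the perfect-shuffle wiring rotates the address and the switch output is then chosen by one destination bit. The sketch below computes that destination-tag route for a generic Omega network; it illustrates the topology only, not the patented conflict-detection logic:

        # Destination-tag routing through an Omega (perfect shuffle) network.
        def omega_route(src, dst, n_bits):
            """Return the port number occupied after each of the n_bits switching stages."""
            mask = (1 << n_bits) - 1
            pos, path = src, []
            for stage in range(n_bits):
                bit = (dst >> (n_bits - 1 - stage)) & 1   # next destination bit (MSB first)
                pos = ((pos << 1) & mask) | bit           # shuffle (rotate left), switch picks bit
                path.append(pos)
            return path

        # 8x8 network (3 stages): processor 3 requesting memory module 6.
        print(omega_route(3, 6, 3))   # [7, 7, 6] -- the final port is the requested module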

  10. The ATLAS Level-1 Calorimeter Trigger: PreProcessor implementation and performance

    NASA Astrophysics Data System (ADS)

    Åsman, B.; Achenbach, R.; Allbrooke, B. M. M.; Anders, G.; Andrei, V.; Büscher, V.; Bansil, H. S.; Barnett, B. M.; Bauss, B.; Bendtz, K.; Bohm, C.; Bracinik, J.; Brawn, I. P.; Brock, R.; Buttinger, W.; Caputo, R.; Caughron, S.; Cerrito, L.; Charlton, D. G.; Childers, J. T.; Curtis, C. J.; Daniells, A. C.; Davis, A. O.; Davygora, Y.; Dorn, M.; Eckweiler, S.; Edmunds, D.; Edwards, J. P.; Eisenhandler, E.; Ellis, K.; Ermoline, Y.; Föhlisch, F.; Faulkner, P. J. W.; Fedorko, W.; Fleckner, J.; French, S. T.; Gee, C. N. P.; Gillman, A. R.; Goeringer, C.; Hülsing, T.; Hadley, D. R.; Hanke, P.; Hauser, R.; Heim, S.; Hellman, S.; Hickling, R. S.; Hidvégi, A.; Hillier, S. J.; Hofmann, J. I.; Hristova, I.; Ji, W.; Johansen, M.; Keller, M.; Khomich, A.; Kluge, E.-E.; Koll, J.; Laier, H.; Landon, M. P. J.; Lang, V. S.; Laurens, P.; Lepold, F.; Lilley, J. N.; Linnemann, J. T.; Müller, F.; Müller, T.; Mahboubi, K.; Martin, T. A.; Mass, A.; Meier, K.; Meyer, C.; Middleton, R. P.; Moa, T.; Moritz, S.; Morris, J. D.; Mudd, R. D.; Narayan, R.; zur Nedden, M.; Neusiedl, A.; Newman, P. R.; Nikiforov, A.; Ohm, C. C.; Perera, V. J. O.; Pfeiffer, U.; Plucinski, P.; Poddar, S.; Prieur, D. P. F.; Qian, W.; Rieck, P.; Rizvi, E.; Sankey, D. P. C.; Schäfer, U.; Scharf, V.; Schmitt, K.; Schröder, C.; Schultz-Coulon, H.-C.; Schumacher, C.; Schwienhorst, R.; Silverstein, S. B.; Simioni, E.; Snidero, G.; Staley, R. J.; Stamen, R.; Stock, P.; Stockton, M. C.; Tan, C. L. A.; Tapprogge, S.; Thomas, J. P.; Thompson, P. D.; Thomson, M.; True, P.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Weber, P.; Wessels, M.; Wiglesworth, C.; Williams, S. L.

    2012-12-01

    The PreProcessor system of the ATLAS Level-1 Calorimeter Trigger (L1Calo) receives about 7200 analogue signals from the electromagnetic and hadronic components of the calorimetric detector system. Lateral division results in cells which are pre-summed to so-called Trigger Towers of size 0.1 × 0.1 along azimuth (phi) and pseudorapidity (η). The received calorimeter signals represent deposits of transverse energy. The system consists of 124 individual PreProcessor modules that digitise the input signals for each LHC collision, and provide energy and timing information to the digital processors of the L1Calo system, which identify physics objects forming much of the basis for the full ATLAS first level trigger decision. This paper describes the architecture of the PreProcessor, its hardware realisation, functionality, and performance.

  11. A light hydrocarbon fuel processor producing high-purity hydrogen

    NASA Astrophysics Data System (ADS)

    Löffler, Daniel G.; Taylor, Kyle; Mason, Dylan

    This paper discusses the design process and presents performance data for a dual fuel (natural gas and LPG) fuel processor for PEM fuel cells delivering between 2 and 8 kW electric power in stationary applications. The fuel processor resulted from a series of design compromises made to address different design constraints. First, the product quality was selected; then, the unit operations needed to achieve that product quality were chosen from the pool of available technologies. Next, the specific equipment needed for each unit operation was selected. Finally, the unit operations were thermally integrated to achieve high thermal efficiency. Early in the design process, it was decided that the fuel processor would deliver high-purity hydrogen. Hydrogen can be separated from other gases by pressure-driven processes based on either selective adsorption or permeation. The pressure requirement made steam reforming (SR) the preferred reforming technology because it does not require compression of combustion air; therefore, steam reforming is more efficient in a high-pressure fuel processor than alternative technologies like autothermal reforming (ATR) or partial oxidation (POX), where the combustion occurs at the pressure of the process stream. A low-temperature pre-reformer reactor is needed upstream of a steam reformer to suppress coke formation; yet, low temperatures facilitate the formation of metal sulfides that deactivate the catalyst. For this reason, a desulfurization unit is needed upstream of the pre-reformer. Hydrogen separation was implemented using a palladium alloy membrane. Packed beds were chosen for the pre-reformer and reformer reactors primarily because of their low cost, relatively simple operation and low maintenance. Commercial, off-the-shelf balance of plant (BOP) components (pumps, valves, and heat exchangers) were used to integrate the unit operations. The fuel processor delivers up to 100 slm hydrogen >99.9% pure with <1 ppm CO, <3 ppm CO 2. The

  12. Method for fast start of a fuel processor

    DOEpatents

    Ahluwalia, Rajesh K [Burr Ridge, IL; Ahmed, Shabbir [Naperville, IL; Lee, Sheldon H. D. [Willowbrook, IL

    2008-01-29

    An improved fuel processor for fuel cells is provided whereby the startup time of the processor is less than sixty seconds and can be as low as 30 seconds, if not less. A rapid startup time is achieved by either igniting or allowing a small mixture of air and fuel to react over and warm up the catalyst of an autothermal reformer (ATR). The ATR then produces combustible gases to be subsequently oxidized on and simultaneously warm up water-gas shift zone catalysts. After normal operating temperature has been achieved, the proportion of air included with the fuel is greatly diminished.

  13. Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-code Processors

    NASA Astrophysics Data System (ADS)

    Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.

    The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed the 13 functions that accounted for the highest percentage of processing were all FFT processing functions. They accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. These results support the next higher level of parallelism in PCID, where groups of several hundred frames each producing one resolved image are sent to cliques of several
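
    Frame-level FFT parallelism of the kind described, in which independent imagery frames are transformed concurrently, can be sketched in a few lines with a process pool. This is a generic illustration, not the PCID code, SPIRAL, or the Cell SPE path:

        # Frame-parallel 2-D FFTs: each worker process transforms whole frames independently.
        import numpy as np
        from multiprocessing import Pool

        def frame_fft(frame):
            return np.fft.fft2(frame)          # per-frame work; a tuned FFT library would sit here

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            frames = [rng.standard_normal((256, 256)) for _ in range(8)]
            with Pool(processes=4) as pool:    # workers handle frames concurrently
                spectra = pool.map(frame_fft, frames)
            print(len(spectra), spectra[0].shape)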

  14. Processor register error correction management

    DOEpatents

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
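
    The mechanism described, a compiler-generated table naming the sensitive registers that the processor should duplicate and check, can be mimicked at a very high level. The sketch below is purely behavioral, with made-up register names, and is not the patented design:

        # Behavioral sketch of table-driven register duplication and checking.
        ERROR_CORRECTION_TABLE = {"r3", "r7"}   # sensitive logical registers named by the compiler

        class ProtectedRegisterFile:
            def __init__(self):
                self.regs, self.shadow = {}, {}

            def write(self, name, value):
                self.regs[name] = value
                if name in ERROR_CORRECTION_TABLE:   # duplicate only what the table marks sensitive
                    self.shadow[name] = value

            def read(self, name):
                value = self.regs[name]
                if name in ERROR_CORRECTION_TABLE and self.shadow[name] != value:
                    raise RuntimeError(f"bit flip detected in {name}; recover from duplicate")
                return value

        rf = ProtectedRegisterFile()
        rf.write("r3", 42)
        rf.regs["r3"] ^= 1          # simulate a soft error in the primary copy
        try:
            rf.read("r3")
        except RuntimeError as e:
            print(e)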

  15. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    NASA Astrophysics Data System (ADS)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications and big data have created a huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.

  16. 76 FR 174 - International Business Machines (IBM), Global Sales Operations Organization, Sales and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-03

    ...] International Business Machines (IBM), Global Sales Operations Organization, Sales and Distribution Business Manager Roles; One Teleworker Located in Charleston, WV; International Business Machines (IBM), Global Sales Operations Organization, Sales and Distribution Business Unit, Relations Analyst and Band 8...

  17. A Comparison of the Apple Macintosh and IBM PC in Laboratory Applications.

    ERIC Educational Resources Information Center

    Williams, Ron

    1986-01-01

    Compares Apple Macintosh and IBM PC microcomputers in terms of their usefulness in the laboratory. No attempt is made to equalize the two computer systems since they represent opposite ends of the computer spectrum. Indicates that the IBM PC is the most useful general-purpose personal computer for laboratory applications. (JN)

  18. Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors

    DTIC Science & Technology

    2009-09-01

    TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes... Table-of-contents fragments: 3. Information Management for Parallelization and Streaming; 4. Results.

  19. 40 CFR 791.45 - Processors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) When a test rule or subsequent Federal Register notice pertaining to a test rule expressly obligates processors as well as manufacturers to assume direct testing and data reimbursement responsibilities. (2... processors voluntarily agree to reimburse manufacturers for a portion of test costs. Only those processors...

  20. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

    The design of a microprocessor is a long, tedious, and error-prone task consisting typically of the design phases of architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from the instruction set but also from the architecture description, including pipelining behavior, which provides design and development tool consistency across all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like a constant instruction word size, deep pipelines and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with an indirect addressing operation were the keys to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the essential improvement that a TVP offers compared with traditional RISC or PDSP designs.
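
    The multiply-accumulate pattern that dominates the DWT workload discussed above is just an inner product per output sample. The toy FIR sketch below (arbitrary coefficients, plain Python) shows the MAC chain that a PDSP-style instruction or the vector processor collapses into one or two cycles:

        # Multiply-accumulate (MAC) loop of an FIR filter: one inner product per output sample.
        def fir(samples, coeffs):
            n_taps = len(coeffs)
            out = []
            for i in range(len(samples) - n_taps + 1):
                acc = 0.0
                for k in range(n_taps):          # the MAC chain a DSP executes per sample
                    acc += coeffs[k] * samples[i + k]
                out.append(acc)
            return out

        print(fir([1, 2, 3, 4, 5, 6], [0.25, 0.5, 0.25]))   # simple 3-tap smoothing filter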

  1. HONEI: A collection of libraries for numerical computations targeting multiple processor architectures

    NASA Astrophysics Data System (ADS)

    van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten

    2009-12-01

    underlying hardware towards heterogeneity and parallelism. This is particularly relevant for data-intensive problems stemming from discretisations with local support, such as finite differences, volumes and elements. Solution method: To address these issues, we present a hardware aware collection of libraries combining the advantages of modern software techniques and hardware oriented programming. Applications built on top of these libraries can be configured trivially to execute on CPUs, GPUs or the Cell processor. In order to evaluate the performance and accuracy of our approach, we provide two domain specific applications; a multigrid solver for the Poisson problem and a fully explicit solver for 2D shallow water equations. Restrictions: HONEI is actively being developed, and its feature list is continuously expanded. Not all combinations of operations and architectures might be supported in earlier versions of the code. Obtaining snapshots from http://www.honei.org is recommended. Unusual features: The considered applications as well as all library operations can be run on NVIDIA GPUs and the Cell BE. Running time: Depending on the application, and the input sizes. The Poisson solver executes in few seconds, while the SWE solver requires up to 5 minutes for large spatial discretisations or small timesteps. References:http://www.nvidia.com/cuda. http://www.ibm.com/developerworks/power/cell.
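
    For a feel of the kind of kernel such a backend-neutral library dispatches, the sketch below is a single Jacobi smoothing sweep for the 2-D Poisson problem written with NumPy; it is a generic illustration, and HONEI's actual CPU/GPU/Cell backends are not reproduced here:

        # One Jacobi relaxation sweep for -laplace(u) = f on a unit square (5-point stencil).
        import numpy as np

        def jacobi_sweep(u, f, h):
            new = u.copy()
            new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                      u[1:-1, :-2] + u[1:-1, 2:] +
                                      h * h * f[1:-1, 1:-1])
            return new

        n = 33
        h = 1.0 / (n - 1)
        u = np.zeros((n, n))                 # homogeneous Dirichlet boundary
        f = np.ones((n, n))                  # constant right-hand side
        for _ in range(100):
            u = jacobi_sweep(u, f, h)
        print(float(u[n // 2, n // 2]))      # centre value after 100 sweeps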

  2. Data Mining and Knowledge Discover - IBM Cognitive Alternatives for NASA KSC

    NASA Technical Reports Server (NTRS)

    Velez, Victor Hugo

    2016-01-01

    Skillful cognitive computing tools capable of transforming industries have been found favorable and profitable for different Directorates at NASA KSC. This study shows how cognitive computing systems can be useful for NASA when computers are trained, much as humans are, to gain knowledge over time. Increasing knowledge through senses, learning and an accumulation of events is how the applications created by IBM empower the artificial intelligence in a cognitive computing system. Over the last decades NASA has explored and applied artificial intelligence, and cognitive computing in particular, in a few projects that adopt models similar to those proposed by IBM Watson. However, the semantic technologies used by IBM's dedicated business unit allow these cognitive computing applications to outperform in-house tools and to deliver analyses that facilitate decision making for managers and leads in a management information system.

  3. 75 FR 60141 - International Business Machines (IBM), Global Technology Services Delivery Division, Including On...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-29

    ... 25, 2010, applicable to workers of International Business Machines (IBM), Global Technology Services... hereby issued as follows: All workers of International Business Machines (IBM), Global Technology... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,164] International Business...

  4. IBM techexplorer and MathML: Interactive Multimodal Scientific Documents

    NASA Astrophysics Data System (ADS)

    Diaz, Angel

    2001-06-01

    The World Wide Web provides a standard publishing platform for disseminating scientific and technical articles, books, journals, courseware, or even homework on the internet; the transition from paper to the web has brought new opportunities for creating interactive content. Students, scientists, and engineers are now faced with the task of rendering the 2D presentational structure of mathematics, harnessing the wealth of scientific and technical software, and creating truly accessible scientific portals across international boundaries and markets. The recent emergence of World Wide Web Consortium (W3C) standards such as the Mathematical Markup Language (MathML), the Extensible Stylesheet Language (XSL), and Aural CSS (ACSS) provides a foundation whereby mathematics can be displayed, enlivened, computed, and audio formatted. With interoperability ensured by standards, software applications can be easily brought together to create extensible and interactive scientific content. In this presentation we will provide an overview of the IBM techexplorer Hypermedia Browser, a web browser plug-in and ActiveX control aimed at bringing interactive mathematics to the masses across platforms and applications. We will demonstrate "live" mathematics, where documents that contain MathML expressions can be edited and computed right inside your favorite web browser. This demonstration will be generalized as we show how MathML can be used to enliven even PowerPoint presentations. Finally, we will close the loop by demonstrating a novel approach to spoken mathematics based on MathML, DOM, XSL, ACSS, techexplorer, and IBM ViaVoice. By making use of techexplorer as the glue that binds the rendered content to the web browser, the back-end computation software, the Java applets that augment the exposition, and voice-rendering systems such as ViaVoice, authors can indeed create truly extensible and interactive scientific content. For more information see: [http://www.software.ibm

  5. Desk-top publishing using IBM-compatible computers.

    PubMed

    Grencis, P W

    1991-01-01

    This paper sets out to describe one Medical Illustration Department's experience of the introduction of computers for desk-top publishing. In this particular case, after careful consideration of all the options open, an IBM-compatible system was installed rather than the often popular choice of an Apple Macintosh.

  6. Aircraft noise prediction program propeller analysis system IBM-PC version user's manual version 2.0

    NASA Technical Reports Server (NTRS)

    Nolan, Sandra K.

    1988-01-01

    The IBM-PC version of the Aircraft Noise Prediction Program (ANOPP) Propeller Analysis System (PAS) is a set of computational programs for predicting the aerodynamics, performance, and noise of propellers. The ANOPP-PAS is a subset of a larger version of ANOPP which can be executed on CDC or VAX computers. This manual provides a description of the IBM-PC version of the ANOPP-PAS and its prediction capabilities, and instructions on how to use the system on an IBM-XT or IBM-AT personal computer. Sections within the manual document installation, system design, ANOPP-PAS usage, data entry preprocessors, and ANOPP-PAS functional modules and procedures. Appendices to the manual include a glossary of ANOPP terms and information on error diagnostics and recovery techniques.

  7. Multi-processor including data flow accelerator module

    DOEpatents

    Davidson, George S.; Pierce, Paul E.

    1990-01-01

    An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
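
    The cell-and-firing behaviour described, in which each computation waits in a memory cell until all of its operand slots are tagged full, is then queued for execution, and finally fans its result out to consumer cells, can be shown as a small interpreter. This is a schematic analogue only, not the patented module:

        # Minimal tagged-memory dataflow interpreter: a cell fires when all operands arrive.
        from collections import deque

        class Cell:
            def __init__(self, op, n_operands, consumers):
                self.op, self.slots = op, [None] * n_operands
                self.consumers = consumers            # list of (cell_id, slot_index) links

        def run(cells, initial_tokens):
            ready = deque()
            def deliver(cid, slot, value):
                cell = cells[cid]
                cell.slots[slot] = value
                if all(s is not None for s in cell.slots):   # all tag bits set -> schedule
                    ready.append(cid)
            for cid, slot, value in initial_tokens:
                deliver(cid, slot, value)
            while ready:
                cell = cells[ready.popleft()]
                result = cell.op(*cell.slots)
                for cid, slot in cell.consumers:             # fan the result out to consumers
                    deliver(cid, slot, result)
            return result

        # (a + b) * (a - b) as a three-cell dependency graph.
        cells = {
            "add": Cell(lambda x, y: x + y, 2, [("mul", 0)]),
            "sub": Cell(lambda x, y: x - y, 2, [("mul", 1)]),
            "mul": Cell(lambda x, y: x * y, 2, []),
        }
        print(run(cells, [("add", 0, 5), ("add", 1, 3), ("sub", 0, 5), ("sub", 1, 3)]))  # 16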

  8. Networking DEC and IBM computers

    NASA Technical Reports Server (NTRS)

    Mish, W. H.

    1983-01-01

    Local area networking of DEC and IBM computers within the structure of the ISO-OSI Seven Layer Reference Model at a raw signaling speed of 1 Mbps or greater is discussed. After an introduction to the ISO-OSI Reference Model and the IEEE-802 Draft Standard for Local Area Networks (LANs), there follows a detailed discussion and comparison of the products available from a variety of manufacturers to perform this networking task. A summary of these products is presented in a table.

  9. Optical Associative Processors For Visual Perception"

    NASA Astrophysics Data System (ADS)

    Casasent, David; Telfer, Brian

    1988-05-01

    We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required, and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given most attention). We analyze the performance of both associative processors and note that there is a considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor and with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (which is achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.

  10. 76 FR 32231 - International Business Machines (IBM), Sales and Distribution Business Unit, Global Sales...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-03

    ... for the workers and former workers of International Business Machines (IBM), Sales and Distribution... reconsideration alleges that IBM outsourced to India and China. During the reconsideration investigation, it was..., Armonk, New York. The subject worker group supply computer software development and maintenance services...

  11. Configurable Multi-Purpose Processor

    NASA Technical Reports Server (NTRS)

    Valencia, J. Emilio; Forney, Chirstopher; Morrison, Robert; Birr, Richard

    2010-01-01

    Advancements in technology have allowed the miniaturization of systems used in aerospace vehicles. This technology is driven by the need for next-generation systems that provide reliable, responsive, and cost-effective range operations while providing increased capabilities such as simultaneous mission support, increased launch trajectories, improved launch and landing opportunities, etc. Leveraging the newest technologies, the command and telemetry processor (CTP) concept provides a compact, flexible, and integrated solution for flight command and telemetry systems and range systems. The CTP is a relatively small circuit board that serves as a processing platform for high-dynamic, high-vibration environments. The CTP can be reconfigured and reprogrammed, allowing it to be adapted for many different applications. The design is centered around a configurable field-programmable gate array (FPGA) device that contains numerous logic cells that can be used to implement traditional integrated circuits. The FPGA contains two PowerPC processors running the VxWorks real-time operating system, which are used to execute software programs specific to each application. The CTP was designed and developed specifically to provide telemetry functions; namely, the command processing, telemetry processing, and GPS metric tracking of a flight vehicle. However, it can be used as a general-purpose processor board to perform numerous functions implemented in either hardware or software using the FPGA's processors and/or logic cells. Functionally, the CTP was designed for range safety applications where it would ultimately become part of a vehicle's flight termination system. Consequently, the major functions of the CTP are to perform the forward link command processing, GPS metric tracking, return link telemetry data processing, error detection and correction, and data encryption/decryption, and to initiate flight termination action commands. Also, the CTP had to be designed to survive and

  12. Implementation of kernels on the Maestro processor

    NASA Astrophysics Data System (ADS)

    Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.

    Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at a 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to a single tile was up to 49 using 49 tiles.

  13. Neurovision processor for designing intelligent sensors

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1992-03-01

    A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.

  14. Performance of the Mayo-IBM PAC system

    NASA Astrophysics Data System (ADS)

    Persons, Kenneth R.; Reardon, Frank J.; Gehring, Dale G.; Hangiandreou, Nicholas J.

    1994-05-01

    The Mayo Clinic and IBM (at Rochester, Minnesota) have jointly developed a picture archiving system for use with Mayo's MRI and CT imaging modalities. This PACS is made up of over 50 computers that work cooperatively to provide archival, retrieval and image distribution services for Mayo's Department of Radiology. This paper will examine the performance characteristics of the system.

  15. Parallel processor for real-time structural control

    NASA Astrophysics Data System (ADS)

    Tise, Bert L.

    1993-07-01

    A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.

  16. Parallel processor for real-time structural control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tise, B.L.

    1992-01-01

    A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up-tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
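
    The per-sample work such a controller performs is dominated by the discrete state-space update; a minimal sketch of that computation (illustrative only, with invented matrix dimensions, not the code run on the DSP96002 modules) is:

        # Discrete state-space controller update, x[k+1] = A x[k] + B u[k],
        # y[k] = C x[k] + D u[k]: the multiply/accumulate pattern executed
        # every sample period.  All dimensions and matrices are made up.
        import numpy as np

        n_states, n_in, n_out = 8, 4, 4
        A = np.eye(n_states) * 0.95
        B = np.full((n_states, n_in), 0.01)
        C = np.full((n_out, n_states), 0.10)
        D = np.zeros((n_out, n_in))
        x = np.zeros(n_states)

        def control_step(u):
            """One sample period: read sensor vector u, return actuator commands y."""
            global x
            y = C @ x + D @ u
            x = A @ x + B @ u
            return y

        print(control_step(np.ones(n_in)))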

  17. Fuel processor and method for generating hydrogen for fuel cells

    DOEpatents

    Ahmed, Shabbir [Naperville, IL; Lee, Sheldon H. D. [Willowbrook, IL; Carter, John David [Bolingbrook, IL; Krumpelt, Michael [Naperville, IL; Myers, Deborah J [Lisle, IL

    2009-07-21

    A method of producing an H2-rich gas stream includes supplying an O2-rich gas, steam, and fuel to an inner reforming zone of a fuel processor that includes a partial oxidation catalyst and a steam reforming catalyst or a combined partial oxidation and steam reforming catalyst. The method also includes contacting the O2-rich gas, steam, and fuel with the partial oxidation catalyst and the steam reforming catalyst or the combined partial oxidation and steam reforming catalyst in the inner reforming zone to generate a hot reformate stream. The method still further includes cooling the hot reformate stream in a cooling zone to produce a cooled reformate stream. Additionally, the method includes removing sulfur-containing compounds from the cooled reformate stream by contacting the cooled reformate stream with a sulfur removal agent. The method still further includes contacting the cooled reformate stream with a catalyst that converts water and carbon monoxide to carbon dioxide and H2 in a water-gas-shift zone to produce a final reformate stream in the fuel processor.

  18. Portability studies of modular data base managers. Interim reports. [Running CDC's DATATRAN 2 on IBM 360/370 and IBM's JOSHUA on CDC computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopp, H.J.; Mortensen, G.A.

    1978-04-01

    Approximately 60% of the full CDC 6600/7600 Datatran 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM and CDC developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.

  19. Rectangular Array Of Digital Processors For Planning Paths

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.; Fossum, Eric R.; Nixon, Robert H.

    1993-01-01

    Prototype 24 x 25 rectangular array of asynchronous parallel digital processors rapidly finds best path across two-dimensional field, which could be patch of terrain traversed by robotic or military vehicle. Implemented as single-chip very-large-scale integrated circuit. Excepting processors on edges, each processor communicates with four nearest neighbors along paths representing travel to north, south, east, and west. Each processor contains delay generator in form of 8-bit ripple counter, preset to 1 of 256 possible values. Operation begins with choice of processor representing starting point. Transmits signals to nearest neighbor processors, which retransmit to other neighboring processors, and process repeats until signals propagated across entire field.
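
    A software analogue of that propagation scheme (a hedged sketch, not the chip's logic) treats each cell's preset delay as a traversal cost, expands a wavefront of arrival times from the starting cell, and recovers the best path by walking back toward decreasing arrival time:

        # Wavefront path planning over a grid of per-cell traversal costs
        # ("delays"): each cell fires its neighbours after its own delay, and the
        # first arrival time at each cell is recorded.
        import heapq

        def plan_path(delays, start, goal):
            rows, cols = len(delays), len(delays[0])
            arrival = {start: 0}
            frontier = [(0, start)]
            came_from = {}
            while frontier:
                t, (r, c) = heapq.heappop(frontier)
                if (r, c) == goal:
                    break
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # N, S, E, W
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        nt = t + delays[nr][nc]
                        if nt < arrival.get((nr, nc), float("inf")):
                            arrival[(nr, nc)] = nt
                            came_from[(nr, nc)] = (r, c)
                            heapq.heappush(frontier, (nt, (nr, nc)))
            path, node = [goal], goal
            while node != start:
                node = came_from[node]
                path.append(node)
            return list(reversed(path))

        terrain = [[1, 1, 9], [9, 1, 9], [9, 1, 1]]   # small example field
        print(plan_path(terrain, (0, 0), (2, 2)))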

  20. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series-parallel precedence graph with n constituent tasks, an O(np^2) algorithm is provided that finds the optimal assignment for the response time optimization problem; it was found that the assignment optimizing the constrained throughput can be determined in O(np^2 log p) time. Special cases of linear, independent, and tree graphs are also considered.
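
    For the simplest case of a linear chain of pipeline stages, the flavour of such an assignment algorithm can be sketched as a dynamic program over processor counts; this is an illustrative reconstruction, not the authors' algorithm, and the timing table is invented:

        # Sketch: assign p processors to a chain of pipeline stages so total
        # response time (sum of stage times) is minimized while every stage meets
        # a throughput requirement.  times[i][k] = measured time of stage i on k
        # processors.  Roughly O(n * p^2) over the chain.
        def assign(times, p, max_stage_time):
            n = len(times)
            INF = float("inf")
            best = [[INF] * (p + 1) for _ in range(n + 1)]   # best[i][q]: stages 0..i-1 on q procs
            choice = [[0] * (p + 1) for _ in range(n + 1)]
            best[0][0] = 0.0
            for i in range(1, n + 1):
                for q in range(p + 1):
                    for k in range(1, q + 1):
                        t = times[i - 1].get(k, INF)
                        if t <= max_stage_time and best[i - 1][q - k] + t < best[i][q]:
                            best[i][q] = best[i - 1][q - k] + t
                            choice[i][q] = k
            q = min(range(p + 1), key=lambda q: best[n][q])
            alloc = []
            for i in range(n, 0, -1):
                alloc.append(choice[i][q])
                q -= choice[i][q]
            return list(reversed(alloc)), min(best[n])

        # invented timing table: stage i on k processors
        times = [{1: 8.0, 2: 4.5, 4: 2.6}, {1: 3.0, 2: 1.7}, {1: 6.0, 2: 3.2, 4: 1.9}]
        print(assign(times, 8, max_stage_time=5.0))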

  1. Lenslet array processors.

    PubMed

    Glaser, I

    1982-04-01

    By combining a lenslet array with masks it is possible to obtain a noncoherent optical processor capable of computing in parallel generalized 2-D discrete linear transformations. We present here an analysis of such lenslet array processors (LAP). The effect of several errors, including optical aberrations, diffraction, vignetting, and geometrical and mask errors, are calculated, and guidelines to optical design of LAP are derived. Using these results, both ultimate and practical performances of LAP are compared with those of competing techniques.

  2. Optimizing the inner loop of the gravitational force interaction on modern processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Michael S

    2010-12-08

    We have achieved superior performance on multiple generations of the fastest supercomputers in the world with our hashed oct-tree N-body code (HOT), spanning almost two decades and garnering multiple Gordon Bell Prizes for significant achievement in parallel processing. Execution time for our N-body code is largely influenced by the force calculation in the inner loop. Improvements to the inner loop using SSE3 instructions have enabled the calculation of over 200 million gravitational interactions per second per processor on a 2.6 GHz Opteron, for a computational rate of over 7 Gflops in single precision (70% of peak). We obtain optimal performance on some processors (including the Cell) by decomposing the reciprocal square root function required for a gravitational interaction into a table lookup, Chebychev polynomial interpolation, and Newton-Raphson iteration, using the algorithm of Karp. By unrolling the loop by a factor of six, and using SPU intrinsics to compute on vectors, we obtain performance of over 16 Gflops on a single Cell SPE. Aggregated over the 8 SPEs on a Cell processor, the overall performance is roughly 130 Gflops. In comparison, the ordinary C version of our inner loop only obtains 1.6 Gflops per SPE with the spuxlc compiler.
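
    A simplified, hedged sketch of that decomposition (a coarse table-lookup seed refined by Newton-Raphson; the Chebyshev interpolation stage of Karp's algorithm is omitted here) is:

        # Approximate 1/sqrt(x) from a coarse table seed refined by Newton-Raphson,
        # the structure used to speed up the gravitational inner loop (simplified).
        import math

        TABLE_BITS = 6
        # Seeds for mantissas m in [1, 2): SEED[i] ~ 1/sqrt(1 + i/2**TABLE_BITS)
        SEED = [1.0 / math.sqrt(1.0 + i / 2 ** TABLE_BITS) for i in range(2 ** TABLE_BITS)]

        def rsqrt(x):
            m, e = math.frexp(x)              # x = m * 2**e with m in [0.5, 1)
            m, e = m * 2.0, e - 1             # renormalize so m is in [1, 2)
            y = SEED[int((m - 1.0) * 2 ** TABLE_BITS)] * 2.0 ** (-e / 2.0)
            for _ in range(2):                # Newton-Raphson: y *= (3 - x*y^2)/2
                y *= 1.5 - 0.5 * x * y * y
            return y

        def pairwise_accel(dx, dy, dz, mass_j, eps2=1e-6):
            """One softened gravitational interaction using the approximation."""
            r2 = dx * dx + dy * dy + dz * dz + eps2
            inv_r = rsqrt(r2)
            s = mass_j * inv_r * inv_r * inv_r        # m_j / r^3
            return s * dx, s * dy, s * dz

        print(rsqrt(2.3), 1.0 / math.sqrt(2.3))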

  3. Automobile Crash Sensor Signal Processor

    DOT National Transportation Integrated Search

    1973-11-01

    The crash sensor signal processor described interfaces between an automobile-installed doppler radar and an air bag activating solenoid or equivalent electromechanical device. The processor utilizes both digital and analog techniques to produce an ou...

  4. 76 FR 54800 - International Business Machines (IBM), Software Group Business Unit, Quality Assurance Group, San...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-02

    ... Machines (IBM), Software Group Business Unit, Quality Assurance Group, San Jose, California; Notice of... workers of International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA... February 2, 2011 (76 FR 5832). The subject worker group supplies acceptance testing services, design...

  5. Compact gasoline fuel processor for passenger vehicle APU

    NASA Astrophysics Data System (ADS)

    Severin, Christopher; Pischinger, Stefan; Ogrzewalla, Jürgen

    Due to the increasing demand for electrical power in today's passenger vehicles, and with the requirements regarding fuel consumption and environmental sustainability tightening, a fuel cell-based auxiliary power unit (APU) becomes a promising alternative to the conventional generation of electrical energy via internal combustion engine, generator and battery. It is obvious that the on-board stored fuel has to be used for the fuel cell system, thus, gasoline or diesel has to be reformed on board. This makes the auxiliary power unit a complex integrated system of stack, air supply, fuel processor, electrics as well as heat and water management. Aside from proving the technical feasibility of such a system, the development has to address three major barriers: start-up time, costs, and size/weight of the systems. In this paper a packaging concept for an auxiliary power unit is presented. The main emphasis is placed on the fuel processor, as good packaging of this large subsystem has the strongest impact on overall size. The fuel processor system consists of an autothermal reformer in combination with water-gas shift and selective oxidation stages, based on adiabatic reactors with inter-cooling. The configuration was realized in a laboratory set-up and experimentally investigated. The results gained from this confirm a general suitability for mobile applications. A start-up time of 30 min was measured, while a potential reduction to 10 min seems feasible. An overall fuel processor efficiency of about 77% was measured. On the basis of the know-how gained by the experimental investigation of the laboratory set-up a packaging concept was developed. Using state-of-the-art catalyst and heat exchanger technology, the volumes of these components are fixed. However, the overall volume is higher mainly due to mixing zones and flow ducts, which do not contribute to the chemical or thermal function of the system. Thus, the concept developed mainly focuses on minimization of those

  6. The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update.

    PubMed

    Huynh, Tien; Rigoutsos, Isidore

    2004-07-01

    In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences, the identification--directly from sequence--of structural deviations from alpha-helicity and the annotation of amino acid sequences for antimicrobial activity. Additionally, annotations for more than 130 archaeal, bacterial, eukaryotic and viral genomes are now available on-line and can be searched interactively. The tools and code bundles continue to be accessible from http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  7. Processor architecture for airborne SAR systems

    NASA Technical Reports Server (NTRS)

    Glass, C. M.

    1983-01-01

    Digital processors for spaceborne imaging radars and application of the technology developed for airborne SAR systems are considered. Transferring algorithms and implementation techniques from airborne to spaceborne SAR processors offers obvious advantages. The following topics are discussed: (1) a quantification of the differences in processing algorithms for airborne and spaceborne SARs; and (2) an overview of three processors for airborne SAR systems.

  8. Analog Processor To Solve Optimization Problems

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Eberhardt, Silvio P.; Thakoor, Anil P.

    1993-01-01

    Proposed analog processor solves "traveling-salesman" problem, considered paradigm of global-optimization problems involving routing or allocation of resources. Includes electronic neural network and auxiliary circuitry based partly on concepts described in "Neural-Network Processor Would Allocate Resources" (NPO-17781) and "Neural Network Solves 'Traveling-Salesman' Problem" (NPO-17807). Processor based on highly parallel computing solves problem in significantly less time.

  9. Two autowire versions for CDC-3200 and IBM-360

    NASA Technical Reports Server (NTRS)

    Billingsley, J. B.

    1972-01-01

    Microelectronics program was initiated to evaluate circuitry, packaging methods, and fabrication approaches necessary to produce completely procured logic system. Two autowire programs were developed for CDC-3200 and IBM-360 computers for use in designing logic systems.

  10. Time-variant analysis of rotorcraft systems dynamics - An exploitation of vector processors

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Xie, M.; Shareef, N. H.

    1993-01-01

    In this paper a generalized algorithmic procedure is presented for handling constraints in mechanical transmissions. The latter are treated as multibody systems of interconnected rigid/flexible bodies. The constraint Jacobian matrices are generated automatically and suitably updated in time, depending on the geometrical and kinematical constraint conditions describing the interconnection between shafts or gears. The types of constraints are classified based on the interconnection of the bodies by assuming that one or more points of contact exist between them. The effects due to elastic deformation of the flexible bodies are included by allowing each body element to undergo small deformations. The procedure is based on recursively formulated Kane's dynamical equations of motion and the finite element method, including the concept of geometrical stiffening effects. The method is implemented on an IBM 3090-600J vector processor with pipelining capabilities. A significant increase in the speed of execution is achieved by vectorizing the developed code in computationally intensive areas. An example consisting of two meshing disks rotating at high angular velocity is presented. Applications are intended for the study of the dynamic behavior of helicopter transmissions.

  11. Enabling Future Robotic Missions with Multicore Processors

    NASA Technical Reports Server (NTRS)

    Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.

    2011-01-01

    Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.

  12. Ultra-Reliable Digital Avionics (URDA) processor

    NASA Astrophysics Data System (ADS)

    Branstetter, Reagan; Ruszczyk, William; Miville, Frank

    1994-10-01

    Texas Instruments Incorporated (TI) developed the URDA processor design under contract with the U.S. Air Force Wright Laboratory and the U.S. Army Night Vision and Electro-Sensors Directorate. TI's approach couples advanced packaging solutions with advanced integrated circuit (IC) technology to provide a high-performance (200 MIPS/800 MFLOPS) modular avionics processor module for a wide range of avionics applications. TI's processor design integrates two Ada-programmable, URDA basic processor modules (BPM's) with a JIAWG-compatible PiBus and TMBus on a single F-22 common integrated processor-compatible form-factor SEM-E avionics card. A separate, high-speed (25-MWord/second 32-bit word) input/output bus is provided for sensor data. Each BPM provides a peak throughput of 100 MIPS scalar concurrent with 400-MFLOPS vector processing in a removable multichip module (MCM) mounted to a liquid-flowthrough (LFT) core and interfacing to a processor interface module printed wiring board (PWB). Commercial RISC technology coupled with TI's advanced bipolar complementary metal oxide semiconductor (BiCMOS) application specific integrated circuit (ASIC) and silicon-on-silicon packaging technologies are used to achieve the high performance in a miniaturized package. A MIPS R4000-family reduced instruction set computer (RISC) processor and a TI 100-MHz BiCMOS vector coprocessor (VCP) ASIC provide, respectively, the 100 MIPS of scalar processor throughput and 400 MFLOPS of vector processing throughput for each BPM. The TI Aladdin ASIC chipset was developed on the TI Aladdin Program under contract with the U.S. Army Communications and Electronics Command and was sponsored by the Advanced Research Projects Agency with technical direction from the U.S. Army Night Vision and Electro-Sensors Directorate.

  13. Communications systems and methods for subsea processors

    DOEpatents

    Gutierrez, Jose; Pereira, Luis

    2016-04-26

    A subsea processor may be located near the seabed of a drilling site and used to coordinate operations of underwater drilling components. The subsea processor may be enclosed in a single interchangeable unit that fits a receptor on an underwater drilling component, such as a blow-out preventer (BOP). The subsea processor may issue commands to control the BOP and receive measurements from sensors located throughout the BOP. A shared communications bus may interconnect the subsea processor and underwater components and the subsea processor and a surface or onshore network. The shared communications bus may be operated according to a time division multiple access (TDMA) scheme.

  14. Experimental testing of the noise-canceling processor.

    PubMed

    Collins, Michael D; Baer, Ralph N; Simpson, Harry J

    2011-09-01

    Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America
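
    For context, the conventional Bartlett processor used as the baseline correlates the measured cross-spectral matrix with modeled replica vectors and picks the candidate range with the largest output; the sketch below uses synthetic plane-wave-like replicas, an assumed hydrophone count, and simulated noise rather than the experiment's measured fields:

        # Minimal Bartlett matched-field processor on synthetic data: for each
        # candidate source range, correlate the data cross-spectral matrix with a
        # modeled replica vector and pick the range that maximizes the output.
        import numpy as np

        rng = np.random.default_rng(0)
        n_hydrophones, n_snapshots = 16, 64
        ranges_m = np.linspace(100, 1000, 181)

        def replica(r):                         # toy replica: phase ramp set by range
            k = 2 * np.pi / 50.0
            v = np.exp(1j * k * r * np.arange(n_hydrophones) / n_hydrophones)
            return v / np.linalg.norm(v)

        true_range = 430.0
        snapshots = (np.outer(replica(true_range), rng.standard_normal(n_snapshots))
                     + 0.5 * (rng.standard_normal((n_hydrophones, n_snapshots))
                              + 1j * rng.standard_normal((n_hydrophones, n_snapshots))))
        K = snapshots @ snapshots.conj().T / n_snapshots      # cross-spectral matrix

        bartlett = [np.real(replica(r).conj() @ K @ replica(r)) for r in ranges_m]
        print("estimated range:", ranges_m[int(np.argmax(bartlett))])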

  15. 7 CFR 1215.14 - Processor.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POPCORN PROMOTION, RESEARCH, AND CONSUMER INFORMATION Popcorn Promotion, Research, and Consumer Information Order Definitions § 1215.14 Processor. Processor means a person engaged in the preparation of unpopped popcorn for the market who owns...

  16. 7 CFR 1215.14 - Processor.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POPCORN PROMOTION, RESEARCH, AND CONSUMER INFORMATION Popcorn Promotion, Research, and Consumer Information Order Definitions § 1215.14 Processor. Processor means a person engaged in the preparation of unpopped popcorn for the market who owns...

  17. 7 CFR 1215.14 - Processor.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POPCORN PROMOTION, RESEARCH, AND CONSUMER INFORMATION Popcorn Promotion, Research, and Consumer Information Order Definitions § 1215.14 Processor. Processor means a person engaged in the preparation of unpopped popcorn for the market who owns...

  18. 7 CFR 1215.14 - Processor.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POPCORN PROMOTION, RESEARCH, AND CONSUMER INFORMATION Popcorn Promotion, Research, and Consumer Information Order Definitions § 1215.14 Processor. Processor means a person engaged in the preparation of unpopped popcorn for the market who owns...

  19. 7 CFR 1215.14 - Processor.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POPCORN PROMOTION, RESEARCH, AND CONSUMER INFORMATION Popcorn Promotion, Research, and Consumer Information Order Definitions § 1215.14 Processor. Processor means a person engaged in the preparation of unpopped popcorn for the market who owns...

  20. A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes

    With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.
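
    The body-force idea can be illustrated with a one-dimensional direct-forcing toy model (purely illustrative, not the WRF-IBM implementation; grid spacing, diffusivity, and terrain height are all assumed): at grid points at or below the immersed boundary, a forcing term is added so the provisional velocity is driven to the boundary value:

        # Toy 1-D direct-forcing immersed boundary: advance a diffusing velocity
        # profile, then add a body force at points below the terrain height so the
        # velocity there relaxes to the no-slip value.
        import numpy as np

        nz, dz, dt, nu = 64, 10.0, 0.5, 5.0
        z = np.arange(nz) * dz
        terrain_height = 155.0                  # immersed boundary location (assumed)
        u = np.ones(nz) * 8.0                   # initial wind profile

        for step in range(2000):
            lap = np.zeros_like(u)
            lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dz ** 2
            u_star = u + dt * nu * lap          # provisional update (diffusion only)
            force = np.where(z <= terrain_height, (0.0 - u_star) / dt, 0.0)
            u = u_star + dt * force             # body force enforces u = 0 below the boundary
            u[-1] = 8.0                         # fixed free-stream value at the top

        print(u[:20].round(2))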

  1. Sporadic-inclusion body myositis (s-IBM) is not so prevalent in Istanbul/Turkey: a muscle biopsy based survey.

    PubMed

    Oflazer, P Serdaroglu; Deymeer, F; Parman, Y

    2011-06-01

    In a muscle biopsy based study, only 9 out of 5450 biopsy samples, received from all parts of the greater Istanbul area, had typical clinical and most suggestive light microscopic sporadic-inclusion body myositis (s-IBM) findings. Two other patients with and ten further patients without characteristic light microscopic findings had a referring diagnosis of s-IBM. As the general and the age-adjusted populations of Istanbul in 2010 were 13,255,685 and 2,347,300 respectively, the calculated corresponding 'estimated prevalences' of most suggestive s-IBM in the Istanbul area were 0.679 × 10^-6 and 3.834 × 10^-6. Since Istanbul receives heavy migration from all regions of Turkey and ours is the only muscle pathology laboratory in Istanbul, projection of these figures to the Turkish population was considered to be reasonable and an estimate of the prevalence of s-IBM in Turkey was obtained. The calculated 'estimated prevalence' of s-IBM in Turkey is lower than the previously reported rates from other countries. The wide variation in the prevalence rates of s-IBM may reflect different genetic, immunogenetic or environmental factors in different populations.

  2. Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks

    PubMed Central

    Kim, Deokho; Park, Karam; Ro, Won W.

    2011-01-01

    While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores would be used as processing nodes of the wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for the heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors referred to as the Cell Broadband Engine. PMID:22164053
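
    To see where the decoding cost comes from, the sketch below implements random linear network coding over GF(2): the sender transmits random XOR combinations of the source packets, and the receiver decodes by Gaussian elimination once it has collected enough independent combinations. This is an illustrative reconstruction, not the paper's Cell implementation, and the packet contents are invented:

        # Random linear network coding over GF(2); the Gaussian-elimination step
        # in decode() is the computational load the paper discusses.
        import random

        PKT_LEN = 8

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def random_coded_packet(packets):
            """One coded packet: random GF(2) coefficients, XOR of selected packets."""
            coeffs = [random.randint(0, 1) for _ in packets]
            payload = bytes(PKT_LEN)
            for c, p in zip(coeffs, packets):
                if c:
                    payload = xor(payload, p)
            return coeffs, payload

        def decode(received, k):
            """Gaussian elimination over GF(2); returns None until rank k is reached."""
            basis = []                          # list of (pivot, (coeffs, payload))
            for coeffs, payload in [(list(c), p) for c, p in received]:
                for piv, (pc, pp) in basis:     # reduce against rows already kept
                    if coeffs[piv]:
                        coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                        payload = xor(payload, pp)
                piv = next((i for i, c in enumerate(coeffs) if c), None)
                if piv is not None:
                    basis.append((piv, (coeffs, payload)))
            if len(basis) < k:
                return None
            basis.sort()
            for i in range(k - 1, -1, -1):      # back-substitution
                piv, (coeffs, payload) = basis[i]
                for j in range(i):
                    pj, (cj, pjp) = basis[j]
                    if cj[piv]:
                        basis[j] = (pj, ([a ^ b for a, b in zip(cj, coeffs)],
                                         xor(pjp, payload)))
            return [payload for _, (_, payload) in basis]

        original = [b"alpha___", b"bravo___", b"charlie_"]
        received, decoded = [], None
        while decoded is None:                  # collect coded packets until decodable
            received.append(random_coded_packet(original))
            decoded = decode(received, len(original))
        print(decoded == original, "after", len(received), "coded packets")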

  3. Parallel processor-based raster graphics system architecture

    DOEpatents

    Littlefield, Richard J.

    1990-01-01

    An apparatus for generating raster graphics images from the graphics command stream includes a plurality of graphics processors connected in parallel, each adapted to receive any part of the graphics command stream for processing the command stream part into pixel data. The apparatus also includes a frame buffer for mapping the pixel data to pixel locations and an interconnection network for interconnecting the graphics processors to the frame buffer. Through the interconnection network, each graphics processor may access any part of the frame buffer concurrently with another graphics processor accessing any other part of the frame buffer. The plurality of graphics processors can thereby transmit concurrently pixel data to pixel locations in the frame buffer.

  4. Fuel Processor Development for a Soldier-Portable Fuel Cell System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palo, Daniel R.; Holladay, Jamie D.; Rozmiarek, Robert T.

    2002-01-01

    Battelle is currently developing a soldier-portable power system for the U.S. Army that will continuously provide 15 W (25 W peak) of base load electric power for weeks or months using a micro technology-based fuel processor. The fuel processing train consists of a combustor, two vaporizers, and a steam-reforming reactor. This paper describes the concept and experimental progress to date.

  5. Multimode power processor

    DOEpatents

    O'Sullivan, G.A.; O'Sullivan, J.A.

    1999-07-27

    In one embodiment, a power processor which operates in three modes: an inverter mode wherein power is delivered from a battery to an AC power grid or load; a battery charger mode wherein the battery is charged by a generator; and a parallel mode wherein the generator supplies power to the AC power grid or load in parallel with the battery. In the parallel mode, the system adapts to arbitrary non-linear loads. The power processor may operate on a per-phase basis wherein the load may be synthetically transferred from one phase to another by way of a bumpless transfer which causes no interruption of power to the load when transferring energy sources. Voltage transients and frequency transients delivered to the load when switching between the generator and battery sources are minimized, thereby providing an uninterruptible power supply. The power processor may be used as part of a hybrid electrical power source system which may contain, in one embodiment, a photovoltaic array, diesel engine, and battery power sources. 31 figs.

  6. Multimode power processor

    DOEpatents

    O'Sullivan, George A.; O'Sullivan, Joseph A.

    1999-01-01

    In one embodiment, a power processor which operates in three modes: an inverter mode wherein power is delivered from a battery to an AC power grid or load; a battery charger mode wherein the battery is charged by a generator; and a parallel mode wherein the generator supplies power to the AC power grid or load in parallel with the battery. In the parallel mode, the system adapts to arbitrary non-linear loads. The power processor may operate on a per-phase basis wherein the load may be synthetically transferred from one phase to another by way of a bumpless transfer which causes no interruption of power to the load when transferring energy sources. Voltage transients and frequency transients delivered to the load when switching between the generator and battery sources are minimized, thereby providing an uninterruptible power supply. The power processor may be used as part of a hybrid electrical power source system which may contain, in one embodiment, a photovoltaic array, diesel engine, and battery power sources.

  7. Comparison of the AMDAHL 470V/6 and the IBM 370/195 using benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snider, D.R.; Midlock, J.L.; Hinds, A.R.

    1976-03-01

    Six groups of jobs were run on the IBM 370/195 at the Applied Mathematics Division (AMD) of Argonne National Laboratory using the current production versions of OS/MVT 21.7 and ASP 3.1. The same jobs were then run on an AMDAHL 470V/6 at the AMDAHL manufacturing facilities in Sunnyvale, California, using the identical operating systems. Performances of the two machines are compared. Differences in the configurations were minimized. The memory size on each machine was the same, all software which had an impact on run times was the same, and the I/O configurations were as similar as possible. This allowed the comparison to be based on the relative performance of the two CPU's. As part of the studies preliminary to the acquisition of the IBM 195 in 1972, two of the groups of jobs had been run on a CDC 7600 by CDC personnel in Arden Hills, Minnesota, on an IBM 360/195 by IBM personnel in Poughkeepsie, New York, and on the AMD 360/50/75 production system in June, 1971. 6 figures, 9 tables.

  8. PixonVision real-time video processor

    NASA Astrophysics Data System (ADS)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

    PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of 1 field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include applications in the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  9. 7 CFR 926.13 - Processor.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... RECORDKEEPING REQUIREMENTS APPLICABLE TO CRANBERRIES NOT SUBJECT TO THE CRANBERRY MARKETING ORDER § 926.13 Processor. Processor means any person who receives or acquires fresh or frozen cranberries or cranberries in... uses such cranberries or concentrate, with or without other ingredients, in the production of a product...

  10. 7 CFR 926.13 - Processor.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... RECORDKEEPING REQUIREMENTS APPLICABLE TO CRANBERRIES NOT SUBJECT TO THE CRANBERRY MARKETING ORDER § 926.13 Processor. Processor means any person who receives or acquires fresh or frozen cranberries or cranberries in... uses such cranberries or concentrate, with or without other ingredients, in the production of a product...

  11. 7 CFR 926.13 - Processor.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... RECORDKEEPING REQUIREMENTS APPLICABLE TO CRANBERRIES NOT SUBJECT TO THE CRANBERRY MARKETING ORDER § 926.13 Processor. Processor means any person who receives or acquires fresh or frozen cranberries or cranberries in... uses such cranberries or concentrate, with or without other ingredients, in the production of a product...

  12. 7 CFR 926.13 - Processor.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... RECORDKEEPING REQUIREMENTS APPLICABLE TO CRANBERRIES NOT SUBJECT TO THE CRANBERRY MARKETING ORDER § 926.13 Processor. Processor means any person who receives or acquires fresh or frozen cranberries or cranberries in... uses such cranberries or concentrate, with or without other ingredients, in the production of a product...

  13. REMOTE: Modem Communicator Program for the IBM personal computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGirt, F.

    1984-06-01

    REMOTE, a Modem Communicator Program, was developed to provide full duplex serial communication with arbitrary remote computers via either dial-up telephone modems or direct lines. The latest version of REMOTE (documented in this report) was developed for the IBM Personal Computer.

  14. 7 CFR 1160.108 - Fluid milk processor.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Fluid milk processor. 1160.108 Section 1160.108... Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...

  15. 7 CFR 1160.108 - Fluid milk processor.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 9 2011-01-01 2011-01-01 false Fluid milk processor. 1160.108 Section 1160.108... Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...

  16. 7 CFR 1160.108 - Fluid milk processor.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 9 2013-01-01 2013-01-01 false Fluid milk processor. 1160.108 Section 1160.108... AGREEMENTS AND ORDERS; MILK), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...

  17. 7 CFR 1160.108 - Fluid milk processor.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 9 2012-01-01 2012-01-01 false Fluid milk processor. 1160.108 Section 1160.108... Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...

  18. 7 CFR 1160.108 - Fluid milk processor.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 9 2014-01-01 2013-01-01 true Fluid milk processor. 1160.108 Section 1160.108... AGREEMENTS AND ORDERS; MILK), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...

  19. Compute Server Performance Results

    NASA Technical Reports Server (NTRS)

    Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.

  20. 76 FR 5832 - International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-02

    ... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,554] International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA, San Jose, CA; Notice of... determination of the TAA petition filed on behalf of workers at International Business Machines (IBM), Software...

  1. The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update

    PubMed Central

    Huynh, Tien; Rigoutsos, Isidore

    2004-01-01

    In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences, the identification—directly from sequence—of structural deviations from α-helicity and the annotation of amino acid sequences for antimicrobial activity. Additionally, annotations for more than 130 archaeal, bacterial, eukaryotic and viral genomes are now available on-line and can be searched interactively. The tools and code bundles continue to be accessible from http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/. PMID:15215340

  2. Modeling heterogeneous processor scheduling for real time systems

    NASA Technical Reports Server (NTRS)

    Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.

    1994-01-01

    A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.

  3. Simulation of continuously logical base cells (CL BC) with advanced functions for analog-to-digital converters and image processors

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2017-10-01

    The paper presents results of the design and modeling of continuously logical base cells (CL BC) based on current mirrors (CM) with functions for preliminary analogue and subsequent analogue-digital processing, intended for building sensor multichannel analog-to-digital converters (SMC ADCs) and image processors (IP). IP and SMC ADCs with vector or matrix parallel inputs-outputs require active basic photosensitive cells with an extended electronic circuit, which are considered in the paper. Such basic cells and the ADCs based on them have a number of advantages: high speed and reliability, simplicity, small power consumption, and a high integration level for linear and matrix structures. We show the design of the CL BC and the photocurrent ADC, their various possible implementations, and their simulations. We consider the CL BC for methods of selection and rank preprocessing, and a linear array of ADCs with conversion to binary codes and Gray codes. In contrast to our previous works, here we dwell more on analogue preprocessing schemes for the signals of neighboring cells. We show how the introduction of simple nodes based on current mirrors extends the range of functions performed by the image processor. Each channel of the structure consists of several digital-analog cells (DC) built on 15-35 CMOS transistors. The number of DCs does not exceed the number of digits of the formed code, and for the iterative type only one DC, complemented by a selection-and-holding device (SHD), is required. One channel of the iterative ADC is based on one DC-(G) and an SHD and has only 35 CMOS transistors. In such ADCs a parallel output code can easily be realized, as well as a serial-parallel output code. The circuits and the simulation results of their design with OrCAD are shown. The supply voltage of the DC is 1.8-3.3 V, the range of the input photocurrent is 0.1-24 µA, and the conversion time is 20-30 ns for 6-8 bit binary or Gray codes. The overall power consumption of the iterative ADC is only 50-100 µW, if the

  4. Performance of a natural gas fuel processor for residential PEFC system using a novel CO preferential oxidation catalyst

    NASA Astrophysics Data System (ADS)

    Echigo, Mitsuaki; Shinke, Norihisa; Takami, Susumu; Tabata, Takeshi

    Natural gas fuel processors have been developed for 500 W and 1 kW class residential polymer electrolyte fuel cell (PEFC) systems. These fuel processors contain all the elements—desulfurizers, steam reformers, CO shift converters, CO preferential oxidation (PROX) reactors, steam generators, burners and heat exchangers—in one package. For the PROX reactor, a single-stage PROX process using a novel PROX catalyst was adopted. In the 1 kW class fuel processor, thermal efficiency of 83% at HHV was achieved at nominal output assuming a H 2 utilization rate in the cell stack of 76%. CO concentration below 1 ppm in the product gas was achieved even under the condition of [O 2]/[CO]=1.5 at the PROX reactor. The long-term durability of the fuel processor was demonstrated with almost no deterioration in thermal efficiency and CO concentration for 10,000 h, 1000 times start and stop cycles, 25,000 cycles of load change.

  5. A Survey of Parallel Sorting Algorithms.

    DTIC Science & Technology

    1981-12-01

    see that, in this algorithm, each Processor i, for 1 ≤ i ≤ p-2, interacts directly only with Processors i+1 and i-1. Processor 0 only interacts with...[Chan76] Chandra, A.K., "Maximal Parallelism in Matrix Multiplication," IBM Report RC 6193, Watson Research Center, Yorktown Heights, N.Y., October 1976
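
    The nearest-neighbour communication pattern in that excerpt is characteristic of odd-even transposition sorting; a small sequential simulation of that scheme (illustrative, not taken from the survey) is:

        # Odd-even transposition sort simulated sequentially: in alternating
        # phases each even- or odd-indexed "processor" compares and exchanges
        # with its right-hand neighbour, so processor i only ever talks to
        # processors i-1 and i+1.
        def odd_even_transposition_sort(values):
            a = list(values)
            n = len(a)
            for phase in range(n):              # n phases suffice for n items
                start = phase % 2               # even phase: pairs (0,1), (2,3), ...
                for i in range(start, n - 1, 2):
                    if a[i] > a[i + 1]:
                        a[i], a[i + 1] = a[i + 1], a[i]
            return a

        print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))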

  6. Mathematics Programming on the Apple II and IBM PC.

    ERIC Educational Resources Information Center

    Myers, Roy E.; Schneider, David I.

    1987-01-01

    Details the features of BASIC used in mathematics programming and provides the information needed to translate between the Apple II and IBM PC computers. Discusses inputting a user-defined function, setting scroll windows, displaying subscripts and exponents, variable names, mathematical characters and special symbols. (TW)

  7. An Apple for Your IBM PC--The Quadlink Board.

    ERIC Educational Resources Information Center

    Owen, G. Scott

    1984-01-01

    Describes nature and installation of the QUADLINK board which allows Apple software to be run on IBM PC microcomputers. Although programs tested ran without problems, users should test their own programs since there are some copy protection schemes that can baffle the board. (JN)

  8. Hazardous Waste Cleanup: IBM Corporation in Poughkeepsie, New York

    EPA Pesticide Factsheets

    This site covers approximately 423 acres, two-thirds of which is occupied by a manufacturing complex with more than 50 buildings. The land use in the area is a mix of industrial, commercial and residential. IBM is located approximately six miles south of t

  9. Onboard processor technology review

    NASA Technical Reports Server (NTRS)

    Benz, Harry F.

    1990-01-01

    The general need and requirements for the onboard embedded processors necessary to control and manipulate data in spacecraft systems are discussed. The current known requirements are reviewed from a user perspective, based on current practices in the spacecraft development process. The current capabilities of available processor technologies are then discussed, and these are projected to the generation of spacecraft computers currently under identified, funded development. An appraisal is provided for the current national developmental effort.

  10. A fully reconfigurable photonic integrated signal processor

    NASA Astrophysics Data System (ADS)

    Liu, Weilin; Li, Ming; Guzzon, Robert S.; Norberg, Erik J.; Parker, John S.; Lu, Mingzhi; Coldren, Larry A.; Yao, Jianping

    2016-03-01

    Photonic signal processing has been considered a solution to overcome the inherent electronic speed limitations. Over the past few years, an impressive range of photonic integrated signal processors have been proposed, but they usually offer limited reconfigurability, a feature highly needed for the implementation of large-scale general-purpose photonic signal processors. Here, we report and experimentally demonstrate a fully reconfigurable photonic integrated signal processor based on an InP-InGaAsP material system. The proposed photonic signal processor is capable of performing reconfigurable signal processing functions including temporal integration, temporal differentiation and Hilbert transformation. The reconfigurability is achieved by controlling the injection currents to the active components of the signal processor. Our demonstration suggests great potential for chip-scale fully programmable all-optical signal processing.

  11. Artificial intelligence in neurodegenerative disease research: use of IBM Watson to identify additional RNA-binding proteins altered in amyotrophic lateral sclerosis.

    PubMed

    Bakkar, Nadine; Kovalik, Tina; Lorenzini, Ileana; Spangler, Scott; Lacoste, Alix; Sponaugle, Kyle; Ferrante, Philip; Argentinis, Elenee; Sattler, Rita; Bowser, Robert

    2018-02-01

    Amyotrophic lateral sclerosis (ALS) is a devastating neurodegenerative disease with no effective treatments. Numerous RNA-binding proteins (RBPs) have been shown to be altered in ALS, with mutations in 11 RBPs causing familial forms of the disease, and 6 more RBPs showing abnormal expression/distribution in ALS albeit without any known mutations. RBP dysregulation is widely accepted as a contributing factor in ALS pathobiology. There are at least 1542 RBPs in the human genome; therefore, other unidentified RBPs may also be linked to the pathogenesis of ALS. We used IBM Watson® to sieve through all RBPs in the genome and identify new RBPs linked to ALS (ALS-RBPs). IBM Watson extracted features from published literature to create semantic similarities and identify new connections between entities of interest. IBM Watson analyzed all published abstracts of previously known ALS-RBPs, and applied that text-based knowledge to all RBPs in the genome, ranking them by semantic similarity to the known set. We then validated the Watson top-ten-ranked RBPs at the protein and RNA levels in tissues from ALS and non-neurological disease controls, as well as in patient-derived induced pluripotent stem cells. Five RBPs previously unlinked to ALS (hnRNPU, Syncrip, RBMS3, Caprin-1 and NUPL2) showed significant alterations in ALS compared to controls. Overall, we successfully used IBM Watson to help identify additional RBPs altered in ALS, highlighting the use of artificial intelligence tools to accelerate scientific discovery in ALS and possibly other complex neurological disorders.

  12. Artifact-Based Transformation of IBM Global Financing

    NASA Astrophysics Data System (ADS)

    Chao, Tian; Cohn, David; Flatgard, Adrian; Hahn, Sandy; Linehan, Mark; Nandi, Prabir; Nigam, Anil; Pinel, Florian; Vergo, John; Wu, Frederick Y.

    IBM Global Financing (IGF) is transforming its business using the Business Artifact Method, an innovative business process modeling technique that identifies key business artifacts and traces their life cycles as they are processed by the business. IGF is a complex, global business operation with many business design challenges. The Business Artifact Method is a fundamental shift in how to conceptualize, design and implement business operations. The Business Artifact Method was extended to solve the problem of designing a global standard for a complex, end-to-end process while supporting local geographic variations. Prior to employing the Business Artifact method, process decomposition, Lean and Six Sigma methods were each employed on different parts of the financing operation. Although they provided critical input to the final operational model, they proved insufficient for designing a complete, integrated, standard operation. The artifact method resulted in a business operations model that was at the right level of granularity for the problem at hand. A fully functional rapid prototype was created early in the engagement, which facilitated an improved understanding of the redesigned operations model. The resulting business operations model is being used as the basis for all aspects of business transformation in IBM Global Financing.

  13. Specification and preliminary design of an array processor

    NASA Technical Reports Server (NTRS)

    Slotnick, D. L.; Graham, M. L.

    1975-01-01

    The design of a computer suited to the class of problems typified by the general circulation of the atmosphere was investigated. A fundamental goal was that the resulting machine should have roughly 100 times the computing capability of an IBM 360/95 computer. A second requirement was that the machine should be programmable in a higher level language similar to FORTRAN. Moreover, the new machine would have to be compatible with the IBM 360/95 since the IBM machine would continue to be used for pre- and post-processing. A third constraint was that the cost of the new machine was to be significantly less than that of other extant machines of similar computing capability, such as the ILLIAC IV and CDC STAR. A final constraint was that it should be feasible to fabricate a complete system and put it in operation by early 1978. Although these objectives were generally met, considerable work remains to be done on the routing system.

  14. An Input Routine Using Arithmetic Statements for the IBM 704 Digital Computer

    NASA Technical Reports Server (NTRS)

    Turner, Don N.; Huff, Vearl N.

    1961-01-01

    An input routine has been designed for use with FORTRAN or SAP coded programs which are to be executed on an IBM 704 digital computer. All input to be processed by the routine is punched on IBM cards as declarative statements of the arithmetic type resembling the FORTRAN language. The routine is 850 words in length. It is capable of loading fixed- or floating-point numbers, octal numbers, and alphabetic words, and of performing simple arithmetic as indicated on input cards. Provisions have been made for rapid loading of arrays of numbers in consecutive memory locations.

  15. Novel processor architecture for onboard infrared sensors

    NASA Astrophysics Data System (ADS)

    Hihara, Hiroki; Iwasaki, Akira; Tamagawa, Nobuo; Kuribayashi, Mitsunobu; Hashimoto, Masanori; Mitsuyama, Yukio; Ochi, Hiroyuki; Onodera, Hidetoshi; Kanbara, Hiroyuki; Wakabayashi, Kazutoshi; Tada, Munehiro

    2016-09-01

    An infrared sensor system is of major concern for inter-planetary missions that investigate the nature and the formation processes of planets and asteroids. The infrared sensor system requires signal preprocessing functions that compensate for the intensity of infrared image sensors in order to obtain high-quality data and a high compression ratio through the limited capacity of transmission channels towards ground stations. For those implementations, combinations of Field Programmable Gate Arrays (FPGAs) and microprocessors are employed by AKATSUKI, the Venus Climate Orbiter, and HAYABUSA2, the asteroid probe. On the other hand, much smaller size and lower power consumption are demanded for future missions to accommodate more sensors. To fulfill this future demand, we developed a novel processor architecture which consists of reconfigurable cluster cores and programmable-logic cells with complementary atom switches. The complementary atom switches enable hardware programming without configuration memories, and thus soft errors on logic-circuit connections are completely eliminated. This is a noteworthy advantage for space applications which cannot be found in conventional re-writable FPGAs. Power consumption is expected to drop to almost one-tenth of that of conventional re-writable FPGAs because of the elimination of configuration memories. The proposed processor architecture can be reconfigured by behavioral synthesis from a higher-level language specification. Consequently, compensation functions are implemented in a single chip, without the program memories that accompany conventional microprocessors, while maintaining comparable performance. This enables us to embed a processor element on each infrared signal detector output channel.

  16. Image acquisition unit for the Mayo/IBM PACS project

    NASA Astrophysics Data System (ADS)

    Reardon, Frank J.; Salutz, James R.

    1991-07-01

    The Mayo Clinic and IBM Rochester, Minnesota, have jointly developed a picture archiving, distribution and viewing system for use with Mayo's CT and MRI imaging modalities. Images are retrieved from the modalities and sent over the Mayo city-wide token ring network to optical storage subsystems for archiving, and to server subsystems for viewing on image review stations. Images may also be retrieved from archive and transmitted back to the modalities. The subsystems that interface to the modalities and communicate to the other components of the system are termed Image Acquisition Units (IAUs). The IAUs are IBM Personal System/2 (PS/2) computers with specially developed software. They operate independently in a network of cooperative subsystems and communicate with the modalities, archive subsystems, image review server subsystems, and a central subsystem that maintains information about the content and location of images. This paper provides a detailed description of the function and design of the Image Acquisition Units.

  17. I Love to Rite! Spelling Checkers in the Writing Classroom.

    ERIC Educational Resources Information Center

    Eiser, Leslie

    1986-01-01

    Highlights the advantages of word processors and spelling checkers in improving student writing skills. Explains how spelling checkers work and describes the types of available checkers. Also provides lists of Apple, IBM, and Commodore word processors and checkers. (ML)

  18. Effect of processor temperature on film dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, Shiv P.; Das, Indra J., E-mail: idas@iupui.edu

    2012-07-01

    Optical density (OD) of a radiographic film plays an important role in radiation dosimetry, which depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air film interface, postexposure processing time, and temperature of the processor. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed in the reference condition (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing films. The temperature of the processor was adjusted manually with increasing temperature. At each temperature, a set of films was processed to evaluate OD at a given dose. For both films, OD is a linear function of processor temperature in the range of 29.4-40.6 °C (85-105 °F) for various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the changes in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used.
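
    Because the study reports OD as a linear function of processor temperature over 29.4-40.6 °C, a first-order correction can be sketched as fitting that line and shifting a measured OD back to a reference temperature. The sketch below is illustrative only: the calibration points, function names, and the assumption that the fitted slope can simply be subtracted out are placeholders, not coefficients or procedures from the paper.

```python
# Hedged sketch: first-order correction of film OD for processor-temperature drift.
# The linear OD(T) form follows the study; the sample numbers below are made up.
import numpy as np

def fit_od_vs_temperature(temps_C, optical_density):
    """Least-squares fit OD(T) = a*T + b over the calibrated range (about 29.4-40.6 C)."""
    a, b = np.polyfit(temps_C, optical_density, deg=1)
    return a, b

def correct_od(od_measured, temp_measured_C, temp_reference_C, a):
    """Shift a measured OD back to the reference processor temperature using slope a."""
    return od_measured - a * (temp_measured_C - temp_reference_C)

# Example with made-up calibration points:
# a, b = fit_od_vs_temperature([29.4, 32.2, 35.0, 37.8, 40.6],
#                              [1.10, 1.16, 1.23, 1.29, 1.35])
# od_ref = correct_od(1.27, 36.5, 35.0, a)
```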

  19. Engraftment Efficiency after Intra-Bone Marrow versus Intravenous Transplantation of Bone Marrow Cells in a Canine Nonmyeloablative Dog Leukocyte Antigen-Identical Transplantation Model.

    PubMed

    Lange, Sandra; Steder, Anne; Killian, Doreen; Knuebel, Gudrun; Sekora, Anett; Vogel, Heike; Lindner, Iris; Dunkelmann, Simone; Prall, Friedrich; Murua Escobar, Hugo; Freund, Mathias; Junghanss, Christian

    2017-02-01

    An intra-bone marrow (IBM) hematopoietic stem cell transplantation (HSCT) is assumed to optimize the homing process and therefore to improve engraftment as well as hematopoietic recovery compared with conventional i.v. HSCT. This study investigated the feasibility and efficacy of IBM HSCT after nonmyeloablative conditioning in an allogeneic canine HSCT model. Two study cohorts received IBM HSCT of either density gradient (IBM-I, n = 7) or buffy coat (IBM-II, n = 6) enriched bone marrow cells. An historical i.v. HSCT cohort served as control. Before allogeneic HSCT experiments were performed, we investigated the feasibility of IBM HSCT by using technetium-99m marked autologous grafts. Scintigraphic analyses confirmed that most IBM-injected autologous cells remained at the injection sites, independent of the applied volume. In addition, cell migration to other bones occurred. The enrichment process led to different allogeneic graft volumes (IBM-I, 2 × 5 mL; IBM-II, 2 × 25 mL) and significantly lower counts of total nucleated cells in IBM-I grafts compared with IBM-II grafts (1.6 × 10⁸/kg versus 3.8 × 10⁸/kg). After allogeneic HSCT, dogs of the IBM-I group showed a delayed engraftment with lower levels of donor chimerism when compared with IBM-II or to i.v. HSCT. Dogs of the IBM-II group tended to reveal slightly faster early leukocyte engraftment kinetics than intravenously transplanted animals. However, thrombocytopenia was significantly prolonged in both IBM groups when compared with i.v. HSCT. In conclusion, IBM HSCT is feasible in a nonmyeloablative HSCT setting but failed to significantly improve engraftment kinetics and hematopoietic recovery in comparison with conventional i.v. HSCT. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.

  20. Ada (Trade Name) Compiler Validation Summary Report: IBM Corporation. IBM Development System for the Ada Language System, Version 1.1.0, IBM 4381 under VM/SP CMS, Release 3.6.

    DTIC Science & Technology

    1988-05-19

    IBM Development System for the Ada Language System, Version 1.1.0, International Business Machines Corporation, Wright-Patterson AFB. IBM 4381 under VM/SP CMS. AVF Control Number: AVF-VSR-82.1087, 87-03-10-TEL. Ada® Compiler Validation Summary Report: International Business Machines Corporation. Ada Validation Organization (AVO). On-site testing was conducted from 18 May 1987 through 19 May 1987 at International Business Machines Corporation, San Diego, CA.

  1. An IBM-compatible program for interactive three-dimensional gravity modeling

    NASA Astrophysics Data System (ADS)

    Broome, John

    1992-04-01

    G3D is a 3-D interactive gravity modeling program for IBM-compatible microcomputers. The program allows a model to be created interactively by defining multiple tabular bodies with horizontal tops and bottoms. The resulting anomaly is calculated using Plouff's algorithm at up to 2000 predefined random or regularly located points. In order to display the anomaly as a color image, the point data are interpolated onto a regular grid and quantized into discrete intervals. Observed and residual gravity field images can also be generated. Adjustments to the model are made using a graphics cursor to move, insert, and delete body points or whole bodies. To facilitate model changes, plan-view body outlines can be overlain on any of the gravity field images during editing. The model's geometry can be displayed in plan view or along a user-defined vertical section. G3D is written in Microsoft® FORTRAN and utilizes the Halo-Professional® (or Halo-88®) graphics subroutine library. The program is written for use on an IBM-compatible microcomputer equipped with a hard disk, numeric coprocessor, and VGA, Number Nine Revolution (Halo-88® only), or TIGA® compatible graphics cards. A mouse or digitizing tablet is recommended for cursor positioning. Program source code, a user's guide, and sample data are available as Geological Survey of Canada Open File (G3D: A Three-dimensional Gravity Modeling Program for IBM-compatible Microcomputers).
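
    The display preparation that G3D performs (interpolating scattered anomaly values onto a regular grid and quantizing them into discrete intervals for a color image) can be sketched generically as below. This is a modern re-creation under assumed grid and level counts, not the program's FORTRAN/Halo implementation.

```python
# Sketch of the display step: grid scattered gravity anomalies, then quantize into
# discrete intervals for a color image. Generic re-creation, not the original code.
import numpy as np
from scipy.interpolate import griddata

def anomaly_image(x, y, anomaly_mgal, nx=256, ny=256, n_levels=16):
    """x, y: observation coordinates; anomaly_mgal: computed anomaly at each point."""
    gx = np.linspace(x.min(), x.max(), nx)
    gy = np.linspace(y.min(), y.max(), ny)
    GX, GY = np.meshgrid(gx, gy)
    grid = griddata((x, y), anomaly_mgal, (GX, GY), method="cubic")  # interpolate
    edges = np.linspace(np.nanmin(grid), np.nanmax(grid), n_levels + 1)
    # NaN cells (outside the data's convex hull) fall into the top bin here.
    return np.digitize(grid, edges[1:-1])  # integer level per cell, 0..n_levels-1

# levels = anomaly_image(x_obs, y_obs, g_calc)  # ready for a 16-color palette
```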

  2. Ada Compiler Validation Summary Report. Certificate Number: 891129W1. 10198 International Business Machines Corporation, the IBM Development System for the Ada Language AIX/RT Follow-on, Version 1.1 IBM RT Follow-on. Completion of On-Site Testing: 29 November 1989

    DTIC Science & Technology

    1989-11-29

    International Business Machines Corporation, Wright-Patterson AFB. The IBM Development System for the Ada Language AIX/RT Follow-on, Version 1.1. Certificate Number: 891129W1.10198, International Business Machines Corporation, The IBM Development System for the Ada Language AIX/RT Follow-on, Version 1.1, IBM RT Follow-on. The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation team.

  3. Ada Compiler Validation Summary Report: Certificate Number 880318W1. 09042, International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0, IBM 4381 under MVS/XA, Host and Target

    DTIC Science & Technology

    1988-03-28

    International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0, IBM 4381 under MVS/XA, host and target. Ada Joint Program Office (AJPO). I declare that International Business Machines Corporation is the owner of record of the object code of the compiler listed in this declaration.

  4. Development and design of experiments optimization of a high temperature proton exchange membrane fuel cell auxiliary power unit with onboard fuel processor

    NASA Astrophysics Data System (ADS)

    Karstedt, Jörg; Ogrzewalla, Jürgen; Severin, Christopher; Pischinger, Stefan

    In this work, the concept development, system layout, component simulation, and overall DOE system optimization of an HT-PEM fuel cell APU with a net electric power output of 4.5 kW and an onboard methane fuel processor are presented. A highly integrated system layout has been developed that enables fast startup within 7.5 min, a closed system water balance, and high fuel processor efficiencies of up to 85% due to the recuperation of the anode offgas burner heat. The integration of the system battery into the load management enhances the transient electric performance and the maximum electric power output of the APU system. Simulation models of the carbon monoxide influence on HT-PEM cell voltage, the concentration and temperature profiles within the autothermal reformer (ATR), and the CO conversion rates within the water-gas shift stages (WGSs) have been developed. They enable the optimization of the CO concentration in the anode gas of the fuel cell in order to achieve maximum system efficiencies and an optimized dimensioning of the ATR and WGS reactors. Furthermore, a DOE optimization of the global system parameters cathode stoichiometry, anode stoichiometry, air/fuel ratio, and steam/carbon ratio of the fuel processing system has been performed in order to achieve maximum system efficiencies for all system operating points under given boundary conditions.
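
    The full-factorial character of such a DOE sweep over the four global parameters can be sketched as follows. The parameter ranges and the surrogate efficiency function are invented placeholders standing in for the coupled ATR/WGS/stack simulation; only the sweep structure is the point here.

```python
# Hedged sketch: full-factorial DOE over the global system parameters named in the
# abstract. `system_efficiency` is a toy surrogate, not the plant simulation.
from itertools import product

levels = {
    "cathode_stoich": [1.6, 1.8, 2.0],
    "anode_stoich":   [1.2, 1.4, 1.6],
    "air_fuel_ratio": [0.28, 0.32, 0.36],   # assumed illustrative range
    "steam_carbon":   [2.0, 2.5, 3.0],
}

def system_efficiency(p):
    # Toy surrogate (pure illustration): peak efficiency near mid-range settings,
    # penalised quadratically away from them. Replace with the real coupled model.
    nominal = {"cathode_stoich": 1.8, "anode_stoich": 1.4,
               "air_fuel_ratio": 0.32, "steam_carbon": 2.5}
    return 0.40 - sum((p[k] - nominal[k]) ** 2 for k in nominal)

def doe_sweep():
    names = list(levels)
    best = None
    for values in product(*(levels[n] for n in names)):
        point = dict(zip(names, values))
        eta = system_efficiency(point)
        if best is None or eta > best[0]:
            best = (eta, point)
    return best  # (max efficiency, operating point that achieves it)

# best_eta, best_point = doe_sweep()
```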

  5. Never Trust Your Word Processor

    ERIC Educational Resources Information Center

    Linke, Dirk

    2009-01-01

    In this article, the author talks about the auto correction mode of word processors that leads to a number of problems and describes an example in biochemistry exams that shows how word processors can lead to mistakes in databases and in papers. The author contends that, where this system is applied, spell checking should not be left to a word…

  6. Programmable DNA-Mediated Multitasking Processor.

    PubMed

    Shu, Jian-Jun; Wang, Qi-Wen; Yong, Kian-Yan; Shao, Fangwei; Lee, Kee Jin

    2015-04-30

    Because of DNA's appealing features as a material, including its minuscule size, defined structural repeat, and rigidity, programmable DNA-mediated processing is a promising computing paradigm that employs DNA as an information-storing and processing substrate to tackle computational problems. The massive parallelism of DNA hybridization exhibits transcendent potential to improve multitasking capabilities and yield a tremendous speed-up over conventional electronic processors with stepwise signal cascades. As an example of this multitasking capability, we present an in vitro programmable DNA-mediated optimal route planning processor as a functional unit embedded in contemporary navigation systems. The novel programmable DNA-mediated processor has several advantages over existing silicon-mediated methods, such as conducting massive data storage and simultaneous processing with far less material than conventional silicon devices.

  7. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
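
    The contention effect described above can be probed with a simple streaming microbenchmark: measure per-process memory bandwidth as more worker processes run concurrently and watch the aggregate rate flatten once the shared memory subsystem saturates. The sketch below is a rough illustration, not one of the synthetic or natural benchmarks used in the study; the array size and worker counts are arbitrary.

```python
# Rough memory-contention probe: aggregate streaming bandwidth usually stops scaling
# once concurrent processes saturate the shared memory subsystem. Illustrative only.
import time
import numpy as np
from multiprocessing import Pool

N = 16_000_000  # ~128 MB of float64 per worker, large enough to defeat caches

def stream_scale(_):
    a = np.ones(N)
    t0 = time.perf_counter()
    b = a * 2.0                                # simple STREAM-like scale kernel
    dt = time.perf_counter() - t0
    return (a.nbytes + b.nbytes) / dt / 1e9    # GB/s seen by this worker

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        with Pool(workers) as pool:
            rates = pool.map(stream_scale, range(workers))
        print(f"{workers} workers: {sum(rates):.1f} GB/s aggregate, "
              f"{np.mean(rates):.1f} GB/s per worker")
```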

  8. Performance of Distributed CFAR Processors in Pearson Distributed Clutter

    NASA Astrophysics Data System (ADS)

    Messali, Zoubeida; Soltani, Faouzi

    2006-12-01

    This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating on positive alpha-stable (PαS) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest of (GO) and the smallest of (SO) CFAR processors are also determined. The performance characteristics of distributed systems are presented and compared both in homogeneous environments and in the presence of interfering targets. We demonstrate, via simulation results, that when the clutter is modelled as a positive alpha-stable distribution, the distributed systems offer robustness against multiple-target situations, especially when using the "OR" fusion rule.
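
    For reference, the cell-averaging detector named above follows a standard sliding-window structure, shown below for a single sensor. This is the textbook CA-CFAR with the usual square-law (exponential-noise) threshold factor, not the Pearson-clutter or distributed "AND"/"OR" fusion variants analyzed in the paper.

```python
# Textbook cell-averaging CFAR (single sensor), as a reference point for the detectors
# discussed above. Not the Pearson-clutter or distributed-fusion variants of the paper.
import numpy as np

def ca_cfar(power, n_ref=16, n_guard=2, pfa=1e-4):
    """power: 1-D array of squared-magnitude samples. Returns boolean detections."""
    n_half = n_ref // 2
    alpha = n_ref * (pfa ** (-1.0 / n_ref) - 1.0)   # CA threshold factor, square-law noise
    det = np.zeros(power.size, dtype=bool)
    for i in range(n_half + n_guard, power.size - n_half - n_guard):
        lead = power[i - n_guard - n_half : i - n_guard]          # leading reference cells
        lag = power[i + n_guard + 1 : i + n_guard + 1 + n_half]   # lagging reference cells
        noise = (lead.sum() + lag.sum()) / n_ref                  # cell-averaged estimate
        det[i] = power[i] > alpha * noise
    return det

# hits = ca_cfar(np.abs(received) ** 2)
```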

  9. Ada Compiler Validation Summary Report: Certificate Number: 890420W1. 10066 International Business Machines Corporation, IBM Development System for the Ada Language, AIX/RT Ada Compiler, Version 1.1.1, IBM RT PC 6150-125

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System for the Ada Language AIX/RT Ada Compiler, Version 1.1.1, Wright-Patterson AFB. Certificate Number: 890420W1.10066, International Business Machines Corporation, IBM Development System for the Ada Language AIX/RT Ada Compiler, Version 1.1.1. The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation team.

  10. Limit characteristics of digital optoelectronic processor

    NASA Astrophysics Data System (ADS)

    Kolobrodov, V. G.; Tymchik, G. S.; Kolobrodov, M. S.

    2018-01-01

    In this article, the limiting characteristics of a digital optoelectronic processor are explored. The limits are defined by diffraction effects and by the matrix structure of the devices used for input and output of optical signals. The purpose of the present research is to optimize the parameters of the processor's components. The developed physical and mathematical model of the DOEP allowed us to establish the limiting characteristics of the processor, restricted by diffraction effects and the array structure of the input and output equipment, as well as to optimize the parameters of the processor's components. The diameter of the entrance pupil of the Fourier lens is determined by the size of the SLM and the pixel size of the modulator. To determine the spectral resolution, we propose using the concept of an optimum phase, for which the resolved diffraction maxima coincide with the pixel centers of the radiation detector.
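
    For orientation, the generic Fourier-optics scaling that ties these component parameters together can be written down; the relations below are standard textbook expressions (wavelength λ, Fourier-lens focal length f, SLM of N × N pixels with pitch p), offered as an assumed reference rather than the specific model derived in the article.

```latex
% Generic Fourier-optics scaling (assumed textbook relations, not the article's model):
% output-plane coordinate mapped to spatial frequency, maximum SLM frequency, and the
% frequency step corresponding to one resolved spot across the SLM aperture L = Np.
\nu = \frac{x}{\lambda f}, \qquad
\nu_{\max} = \frac{1}{2p}, \qquad
\delta\nu = \frac{1}{N p}
\;\Longrightarrow\;
x_{\max} = \frac{\lambda f}{2p}, \qquad
\delta x = \frac{\lambda f}{N p}.
```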

  11. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  12. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  13. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  14. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  15. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  16. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.

  17. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM 370 VERSION)

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.

  18. Testing and operating a multiprocessor chip with processor redundancy

    DOEpatents

    Bellofatto, Ralph E; Douskey, Steven M; Haring, Rudolf A; McManus, Moyra K; Ohmacht, Martin; Schmunkamp, Dietmar; Sugavanam, Krishnan; Weatherford, Bryan J

    2014-10-21

    A system and method for improving the yield rate of a multiprocessor semiconductor chip that includes primary processor cores and one or more redundant processor cores. A first tester conducts a first test on one or more processor cores, and encodes results of the first test in an on-chip non-volatile memory. A second tester conducts a second test on the processor cores, and encodes results of the second test in an external non-volatile storage device. An override bit of a multiplexer is set if a processor core fails the second test. In response to the override bit, the multiplexer selects a physical-to-logical mapping of processor IDs according to one of: the encoded results in the memory device or the encoded results in the external storage device. On-chip logic configures the processor cores according to the selected physical-to-logical mapping.
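
    In software terms, the selection described above (an override bit choosing between the on-chip and externally stored test results, and a physical-to-logical remapping built from whichever set of passing cores is selected) might look like the sketch below. All names and the mapping policy are illustrative assumptions, not the patented hardware logic.

```python
# Illustrative sketch of redundancy mapping: choose a physical-to-logical core map from
# whichever test record the override bit selects. Not the actual patented hardware.

def build_mapping(good_cores, n_logical):
    """Map logical IDs 0..n_logical-1 onto the first n_logical passing physical cores."""
    if len(good_cores) < n_logical:
        raise RuntimeError("not enough working cores; chip fails the yield screen")
    return {logical: physical
            for logical, physical in enumerate(sorted(good_cores)[:n_logical])}

def select_mapping(onchip_pass, external_pass, override_bit, n_logical):
    """override_bit is set when a core fails the second (external) test."""
    good = external_pass if override_bit else onchip_pass
    return build_mapping(good, n_logical)

# Example: 17 physical cores, 16 logical; physical core 9 fails the second test.
# mapping = select_mapping(onchip_pass=set(range(17)),
#                          external_pass=set(range(17)) - {9},
#                          override_bit=True, n_logical=16)
```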

  19. Maintenance of Microcomputers. Manual and Apple II Session, IBM Session.

    ERIC Educational Resources Information Center

    Coffey, Michael A.; And Others

    This guide describes maintenance procedures for IBM and Apple personal computers, provides information on detecting and diagnosing problems, and details diagnostic programs. Included are discussions of printers, terminals, disks, disk drives, keyboards, hardware, and software. The text is supplemented by various diagrams. (EW)

  20. Authoritarian Decision-Making and Alternative Patterns of Power and Influence: The Mexican IBM Case

    DTIC Science & Technology

    1988-05-01

    For a fuller treatment, see Olga Pellicer de Brody, "El llamado a las Inversiones Extranjeras, 1953-1958," in Las Empresas Transnacionales en ...; Hewlett-Packard, 27 February 1987. See also Steve Frazier, "Apple y Hewlett-Packard Contra una Empresa 100% de IBM en Mexico," Excelsior, 27...; El Heraldo. See "IBM Piensa Crear en Mexico una Planta de Microcomputadoras," El Heraldo, 22 January 1985. Orme, ibid.

  1. Database for LDV Signal Processor Performance Analysis

    NASA Technical Reports Server (NTRS)

    Baker, Glenn D.; Murphy, R. Jay; Meyers, James F.

    1989-01-01

    A comparative and quantitative analysis of various laser velocimeter signal processors is difficult because standards for characterizing signal bursts have not been established. This leaves the researcher to select a signal processor based only on manufacturers' claims without the benefit of direct comparison. The present paper proposes the use of a database of digitized signal bursts obtained from a laser velocimeter under various configurations as a method for directly comparing signal processors.

  2. Communications Processor Operating System Study. Executive Summary,

    DTIC Science & Technology

    1980-11-01

    AD-A095 b36: Rome Air Development Center, Griffiss AFB, NY. Communications Processor Operating System Study: Executive Summary. Julian Gitlih, November 1980. Approved for public release; distribution unlimited.

  3. Soft-core processor study for node-based architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James

    2008-09-01

    Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power and positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary with varying degrees of mission-specific performance requirements on these nodes. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA based processors for use in future NBA systems--two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty; cache error mitigation is necessary when operating in a radiation environment.

  4. Diagnostic report acquisition unit for the Mayo/IBM PACS project

    NASA Astrophysics Data System (ADS)

    Brooks, Everett G.; Rothman, Melvyn L.

    1991-07-01

    The Mayo Clinic and IBM Rochester have jointly developed a picture archive and control system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. One of the challenges of developing a useful PACS involves integrating the diagnostic reports with the electronic images so they can be displayed simultaneously. By the time a diagnostic report is generated for a particular case, its images have already been captured and archived by the PACS. To integrate the report with the images, the authors have developed an IBM Personal System/2 computer (PS/2) based diagnostic report acquisition unit (RAU). A typed copy of the report is transmitted via facsimile to the RAU where it is stacked electronically with other reports that have been sent previously but not yet processed. By processing these reports at the RAU, the information they contain is integrated with the image database and a copy of the report is archived electronically on an IBM Application System/400 computer (AS/400). When a user requests a set of images for viewing, the report is automatically integrated with the image data. By using a hot key, the user can toggle on/off the report on the display screen. This report describes process, hardware, and software employed to integrate the diagnostic report information into the PACS, including how the report images are captured, transmitted, and entered into the AS/400 database. Also described is how the archived reports and their associated medical images are located and merged for retrieval and display. The methods used to detect and process error conditions are also discussed.

  5. Extended Durability Testing of an External Fuel Processor for a Solid Oxide Fuel Cell (SOFC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mark Perna; Anant Upadhyayula; Mark Scotto

    2012-11-05

    Durability testing was performed on an external fuel processor (EFP) for a solid oxide fuel cell (SOFC) power plant. The EFP enables the SOFC to reach high system efficiency (electrical efficiency up to 60%) using pipeline natural gas and eliminates the need for large quantities of bottled gases. LG Fuel Cell Systems Inc. (formerly known as Rolls-Royce Fuel Cell Systems (US) Inc.) (LGFCS) is developing natural gas-fired SOFC power plants for stationary power applications. These power plants will greatly benefit the public by reducing the cost of electricity while reducing the amount of gaseous emissions of carbon dioxide, sulfur oxides, and nitrogen oxides compared to conventional power plants. The EFP uses pipeline natural gas and air to provide all the gas streams required by the SOFC power plant; specifically those needed for start-up, normal operation, and shutdown. It includes a natural gas desulfurizer, a synthesis-gas generator and a start-gas generator. The research in this project demonstrated that the EFP could meet its performance and durability targets. The data generated helped assess the impact of long-term operation on system performance and system hardware. The research also showed the negative impact of ambient weather (both hot and cold conditions) on system operation and performance.

  6. An optical/digital processor - Hardware and applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Sterling, W. M.

    1975-01-01

    A real-time two-dimensional hybrid processor consisting of a coherent optical system, an optical/digital interface, and a PDP-11/15 control minicomputer is described. The input electrical-to-optical transducer is an electron-beam addressed potassium dideuterium phosphate (KD2PO4) light valve. The requirements and hardware for the output optical-to-digital interface, which is constructed from modular computer building blocks, are presented. Initial experimental results demonstrating the operation of this hybrid processor in phased-array radar data processing, synthetic-aperture image correlation, and text correlation are included. The applications chosen emphasize the role of the interface in the analysis of data from an optical processor and possible extensions to the digital feedback control of an optical processor.

  7. A tale of two arcs? Plate tectonics of the Izu-Bonin-Mariana (IBM) arc using subducted slab constraints

    NASA Astrophysics Data System (ADS)

    Wu, J. E.; Suppe, J.; Renqi, L.; Kanda, R. V. S.

    2014-12-01

    Published plate reconstructions typically show the Izu-Bonin Marianas arc (IBM) forming as a result of long-lived ~50 Ma Pacific subduction beneath the Philippine Sea. These reconstructions rely on the critical assumption that the Philippine Sea was continuously coupled to the Pacific during the lifetime of the IBM arc. Because of this assumption, significant (up to 1500 km) Pacific trench retreat is required to accommodate the 2000 km of Philippine Sea/IBM northward motion since the Eocene that is constrained by paleomagnetic data. In this study, we have mapped subducted slabs of mantle lithosphere from MITP08 global seismic tomography (Li et al., 2008) and restored them to a model Earth surface to constrain plate tectonic reconstructions. Here we present two subducted slab constraints that call into question current IBM arc reconstructions: 1) The northern and central Marianas slabs form a sub-vertical 'slab wall' down to maximum 1500 km depths in the lower mantle. This slab geometry is best explained by a near-stationary Marianas trench that has remained +/- 250 km E-W of its present-day position since ~45 Ma, and does not support any significant Pacific slab retreat. 2) A vanished ocean is revealed by an extensive swath of sub-horizontal slabs at 700 to 1000 km depths in the lower mantle below present-day Philippine Sea to Papua New Guinea. We call this vanished ocean the 'East Asian Sea'. When placed in an Eocene plate reconstruction, the East Asian Sea fits west of the reconstructed Marianas Pacific trench position and north of the Philippine Sea plate. This implies that the Philippine Sea and Pacific were not adjacent at IBM initiation, but were in fact separated by a lost ocean. Here we propose a new IBM arc reconstruction constrained by subducted slabs mapped under East Asia. At ~50 Ma, the present-day IBM arc initiated at equatorial latitudes from East Asian Sea subduction below the Philippine Sea. A separate arc was formed from Pacific subduction below

  8. Simulation of a master-slave event set processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comfort, J.C.

    1984-03-01

    Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters to the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation. 7 references.
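
    The overlap idea can be illustrated with a single-"slave" software analogue: a worker thread owns the pending-event priority queue while the main thread plays the host, so insertions never block the host and only a request for the next event synchronizes the two. The class and message names below are hypothetical; this is not the paper's multiprocessor design.

```python
# Software analogue of offloading event-set work: a worker thread owns the pending-event
# heap (the "slave"); the main thread acts as the host processor. Illustrative only.
import heapq
import queue
import threading

class EventSetWorker(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.requests = queue.Queue()   # ("insert", (time, event)) or ("next", reply_q)
        self.heap = []

    def run(self):
        while True:
            op, payload = self.requests.get()
            if op == "insert":
                heapq.heappush(self.heap, payload)
            elif op == "next":
                payload.put(heapq.heappop(self.heap) if self.heap else None)

    def schedule(self, time, event):
        self.requests.put(("insert", (time, event)))     # host never blocks here

    def next_event(self):
        reply = queue.Queue()
        self.requests.put(("next", reply))
        return reply.get()   # host blocks only when it actually needs the next event

# worker = EventSetWorker(); worker.start()
# worker.schedule(3.5, "arrival"); t, ev = worker.next_event()
```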

  9. The GF-3 SAR Data Processor

    PubMed Central

    Han, Bing; Ding, Chibiao; Zhong, Lihua; Liu, Jiayin; Qiu, Xiaolan; Hu, Yuxin; Lei, Bin

    2018-01-01

    The Gaofen-3 (GF-3) data processor was developed as a workstation-based GF-3 synthetic aperture radar (SAR) data processing system. The processor consists of two vital subsystems of the GF-3 ground segment, which are referred to as data ingesting subsystem (DIS) and product generation subsystem (PGS). The primary purpose of DIS is to record and catalogue GF-3 raw data with a transferring format, and PGS is to produce slant range or geocoded imagery from the signal data. This paper presents a brief introduction of the GF-3 data processor, including descriptions of the system architecture, the processing algorithms and its output format. PMID:29534464

  10. Middle School Pupil Writing and the Word Processor.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    Pupils in middle schools should have ample opportunities to write with the use of word processors. Legible writing in longhand will always be necessary in selected situations but, nevertheless, much drudgery is taken care of when using a word processor. Word processors tend to be very user friendly in that few mechanical skills are needed by the…

  11. The computational structural mechanics testbed generic structural-element processor manual

    NASA Technical Reports Server (NTRS)

    Stanley, Gary M.; Nour-Omid, Shahram

    1990-01-01

    The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template is documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended for both Testbed users who wish to invoke ES processors during the course of a structural analysis, and Testbed developers who wish to construct new element processors (or modify existing ones).

  12. Multiple core computer processor with globally-accessible local memories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalf, John; Donofrio, David; Oliker, Leonid

    A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.

  13. Organization of brain tissue - Is the brain a noisy processor.

    NASA Technical Reports Server (NTRS)

    Adey, W. R.

    1972-01-01

    This paper presents some thoughts on functional organization in cerebral tissue. 'Spontaneous' wave and unit firing are considered as essential phenomena in the handling of information. Various models are discussed which have been suggested to describe the pseudorandom behavior of brain cells, leading to a view of the brain as an information processor and its role in learning, memory, remembering and forgetting.

  14. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  15. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  16. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  17. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  18. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  19. Aberration correction results in the IBM STEM instrument.

    PubMed

    Batson, P E

    2003-09-01

    Results from the installation of aberration correction in the IBM 120 kV STEM argue that a sub-angstrom probe size has been achieved. Results and the experimental methods used to obtain them are described here. Some post-experiment processing is necessary to demonstrate the probe size of about 0.078 nm. While the promise of aberration correction is demonstrated, we remain at the very threshold of practicality, given the very stringent stability requirements.

  20. A data base processor semantics specification package

    NASA Technical Reports Server (NTRS)

    Fishwick, P. A.

    1983-01-01

    A Semantics Specification Package (DBPSSP) for the Intel Data Base Processor (DBP) is defined. DBPSSP serves as a collection of cross assembly tools that allow the analyst to assemble request blocks on the host computer for passage to the DBP. The assembly tools discussed in this report may be effectively used in conjunction with a DBP compatible data communications protocol to form a query processor, precompiler, or file management system for the database processor. The source modules representing the components of DBPSSP are fully commented and included.

  1. Ada Compiler Validation Summary Report. Certificate Number: 900726W1. 11017, Verdix Corporation VADS IBM RISC System/6000, AIX 3.1, VAda-110-7171, Version 6.0 IBM RISC System/6000 Model 530 = IBM RISC System/6000 Model 530

    DTIC Science & Technology

    1991-01-22

    Customer Agreement Number: 90-05-29-VRX. See Section 3.1 for any additional information about the testing environment. Ada Compiler Validation Summary Report, 22 January 1991: Certificate Number 900726W1.11017, Verdix Corporation, VADS IBM RISC System/6000, AIX 3.1, VAda-110-7171, Version 6.0, IBM RISC System/6000 Model 530.

  2. NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    1995-01-01

    Current Connection Machine (CM) Fortran codes developed for the CM-2 and the CM-5 represent an important class of parallel applications. Several users have employed CM Fortran codes in production mode on the CM-2 and the CM-5 for the last five to six years, constituting a heavy investment in terms of cost and time. With Thinking Machines Corporation's decision to withdraw from the hardware business and with the decommissioning of many CM-2 and CM-5 machines, the best way to protect the substantial investment in CM Fortran codes is to port the codes to High Performance Fortran (HPF) on highly parallel systems. HPF is very similar to CM Fortran and thus represents a natural transition. Conversion issues involved in porting CM Fortran codes on the CM-5 to HPF are presented. In particular, the differences between data distribution directives and the CM Fortran Utility Routines Library, as well as the equivalent functionality in the HPF Library, are discussed. Several CM Fortran codes (the Cannon algorithm for matrix-matrix multiplication, a linear solver for Ax=b, 1-D convolution for 2-D datasets, a Laplace's equation solver, and a Direct Simulation Monte Carlo (DSMC) code) have been ported to Subset HPF on the IBM SP2 and the SGI Power Challenge. Speedup ratios versus number of processors for the linear solver and the DSMC code are presented.
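
    One of the ported kernels, the Cannon algorithm for matrix-matrix multiplication, distributes both operands over a square processor grid, skews them once, and then alternates local multiply-accumulate with circular block shifts. The sketch below simulates those shifts serially in NumPy to show the data-movement pattern; it is not the CM Fortran or HPF source.

```python
# Serial NumPy simulation of Cannon's algorithm on a q x q block grid: initial skew of
# A's block-rows and B's block-columns, then q rounds of multiply-accumulate and shift.
import numpy as np

def cannon_matmul(A, B, q):
    n = A.shape[0]
    assert n % q == 0, "matrix size must be divisible by the grid size"
    b = n // q
    Ab = [[A[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(q)] for i in range(q)]
    Bb = [[B[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(q)] for i in range(q)]
    C = np.zeros_like(A)
    # Initial alignment: shift block-row i of A left by i, block-column j of B up by j.
    Ab = [[Ab[i][(j + i) % q] for j in range(q)] for i in range(q)]
    Bb = [[Bb[(i + j) % q][j] for j in range(q)] for i in range(q)]
    for _ in range(q):
        for i in range(q):
            for j in range(q):
                C[i*b:(i+1)*b, j*b:(j+1)*b] += Ab[i][j] @ Bb[i][j]
        # Circular shifts: A blocks move one step left, B blocks one step up.
        Ab = [[Ab[i][(j + 1) % q] for j in range(q)] for i in range(q)]
        Bb = [[Bb[(i + 1) % q][j] for j in range(q)] for i in range(q)]
    return C

# A = np.random.rand(8, 8); B = np.random.rand(8, 8)
# assert np.allclose(cannon_matmul(A, B, q=4), A @ B)
```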

  3. Using IBMs to Investigate Spatially-dependent Processes in Landscape Genetics Theory

    EPA Science Inventory

    Much of landscape and conservation genetics theory has been derived using non-spatial mathematical models. Here, we use a mechanistic, spatially-explicit, eco-evolutionary IBM to examine the utility of this theoretical framework in landscapes with spatial structure. Our analysis...

  4. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products and such architectures are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

  5. Analysis of the control structures for an integrated ethanol processor for proton exchange membrane fuel cell systems

    NASA Astrophysics Data System (ADS)

    Biset, S.; Nieto Deglioumini, L.; Basualdo, M.; Garcia, V. M.; Serra, M.

    The aim of this work is to investigate a good preliminary plantwide control structure for the process of hydrogen production from bioethanol to be used in a proton exchange membrane (PEM) fuel cell, accounting only for steady-state information. The objective is to keep the process at its optimal operating point, that is, performing energy integration to achieve maximum efficiency. Ethanol, produced from renewable feedstocks, feeds a fuel processor investigated for steam reforming, followed by high- and low-temperature shift reactors and preferential oxidation, which are coupled to a polymeric fuel cell. Applying steady-state simulation techniques and using thermodynamic models, the performance of the complete system with two different control structures has been evaluated for the most typical perturbations. A sensitivity analysis for the key process variables, together with the rigorous operability requirements of the fuel cell, is taken into account in defining an acceptable plantwide control structure. This is the first work showing an alternative control structure applied to this kind of process.

  6. Modifications of the IBM personal computer synchronous communications support programs for use with the Multics

    USGS Publications Warehouse

    Kork, John O.

    1983-01-01

    Version 1.00 of the Asynchronous Communications Support supplied with the IBM Personal Computer must be modified to be used for communications with Multics. Version 2.00 can be used as supplied, but error checking and screen printing capabilities can be added by using modifications very similar to those required for Version 1.00. This paper describes and lists required programs on Multics and appropriate modifications to both Versions 1.00 and 2.00 of the programs supplied by IBM.

  7. Generalized Information Architecture for Managing Requirements in IBM's Rational DOORS® Application.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aragon, Kathryn M.; Eaton, Shelley M.; McCornack, Marjorie Turner

    When a requirements engineering effort fails to meet expectations, often times the requirements management tool is blamed. Working with numerous project teams at Sandia National Laboratories over the last fifteen years has shown us that the tool is rarely the culprit; usually it is the lack of a viable information architecture with well-designed processes to support requirements engineering. This document illustrates design concepts with rationale, as well as a proven information architecture to structure and manage information in support of requirements engineering activities for any size or type of project. This generalized information architecture is specific to IBM's Rational DOORS (Dynamic Object Oriented Requirements System) software application, which is the requirements management tool in Sandia's CEE (Common Engineering Environment). This generalized information architecture can be used as presented or as a foundation for designing a tailored information architecture for project-specific needs. It may also be tailored for another software tool. Version 1.0, 4 November 201

  8. The CSM testbed matrix processors internal logic and dataflow descriptions

    NASA Technical Reports Server (NTRS)

    Regelbrugge, Marc E.; Wright, Mary A.

    1988-01-01

    This report constitutes the final report for subtask 1 of Task 5 of NASA Contract NAS1-18444, Computational Structural Mechanics (CSM) Research. This report contains a detailed description of the coded workings of selected CSM Testbed matrix processors (i.e., TOPO, K, INV, SSOL) and of the arithmetic utility processor AUS. These processors and the current sparse matrix data structures are studied and documented. Items examined include: details of the data structures, interdependence of data structures, data-blocking logic in the data structures, processor data flow and architecture, and processor algorithmic logic flow.

  9. The ISS Water Processor Catalytic Reactor as a Post Processor for Advanced Water Reclamation Systems

    NASA Technical Reports Server (NTRS)

    Nalette, Tim; Snowdon, Doug; Pickering, Karen D.; Callahan, Michael

    2007-01-01

    Advanced water processors being developed for NASA's Exploration Initiative rely on phase change technologies and/or biological processes as the primary means of water reclamation. As a result of the phase change, volatile compounds will also be transported into the distillate product stream. The catalytic reactor assembly used in the International Space Station (ISS) water processor assembly, referred to as the Volatile Removal Assembly (VRA), has demonstrated high-efficiency oxidation of many of these volatile contaminants, such as low molecular weight alcohols and acetic acid, and is considered a viable post-treatment system for all advanced water processors. To support this investigation, two ersatz solutions were defined to be used for further evaluation of the VRA. The first solution was developed as part of an internal research and development project at Hamilton Sundstrand (HS) and is based primarily on ISS experience related to the development of the VRA. The second ersatz solution was defined by NASA in support of a study contract to Hamilton Sundstrand to evaluate the VRA as a potential post processor for the Cascade Distillation system being developed by Honeywell. This second ersatz solution contains several low molecular weight alcohols, organic acids, and several inorganic species. A range of residence times, oxygen concentrations, and operating temperatures has been studied with both ersatz solutions to provide additional performance data on the VRA catalyst.

  10. Embedded processor extensions for image processing

    NASA Astrophysics Data System (ADS)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like video telephony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  11. LION4; LION; three-dimensional temperature distribution program. [CDC6600,7600; UNIVAC1108; IBM360,370; FORTRAN IV and ASCENT (CDC6600,7600), FORTRAN IV (UNIVAC1108A,B and IBM360,370)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binney, E.J.

    LION4 is a computer program for calculating one-, two-, or three-dimensional transient and steady-state temperature distributions in reactor and reactor plant components. It is used primarily for thermal-structural analyses. It utilizes finite difference techniques with first-order forward difference integration and is capable of handling a wide variety of bounding conditions. Heat transfer situations accommodated include forced and free convection in both reduced and fully-automated temperature dependent forms, coolant flow effects, a limited thermal radiation capability, a stationary or stagnant fluid gap, a dual dependency (temperature difference and temperature level) heat transfer, an alternative heat transfer mode comparison and selection facility combined with a heat flux direction sensor, and any form of time-dependent boundary temperatures. The program, which handles time and space dependent internal heat generation, can also provide temperature dependent material properties with limited non-isotropic properties. User-oriented capabilities available include temperature means with various weightings and a complete heat flow rate surveillance system. Hardware: CDC6600, 7600; UNIVAC1108; IBM360, 370. Languages: FORTRAN IV and ASCENT (CDC6600, 7600), FORTRAN IV (UNIVAC1108A,B and IBM360, 370). Operating systems: SCOPE (CDC6600, 7600), EXEC8 (UNIVAC1108A,B), OS/360, 370 (IBM360, 370). The CDC6600 version plotter routine LAPL4 is used to produce the input required by the associated CalComp plotter for graphical output. The IBM360 version requires 350K for execution and one additional input/output unit besides the standard units.

  12. Design of a MIMD neural network processor

    NASA Astrophysics Data System (ADS)

    Saeks, Richard E.; Priddy, Kevin L.; Pap, Robert M.; Stowell, S.

    1994-03-01

    The Accurate Automation Corporation (AAC) neural network processor (NNP) module is a fully programmable multiple instruction multiple data (MIMD) parallel processor optimized for the implementation of neural networks. The AAC NNP design fully exploits the intrinsic sparseness of neural network topologies. Moreover, by using a MIMD parallel processing architecture, one can update multiple neurons in parallel with efficiency approaching 100 percent as the size of the network increases. Each AAC NNP module has 8 K neurons and 32 K interconnections and is capable of 140,000,000 connections per second, with an eight-processor array capable of over one billion connections per second.

  13. International Business Machines (IBM) Corporation Interim Agreement EPA Case No. 08-0113-00

    EPA Pesticide Factsheets

    On March 27, 2008, the United States Environmental Protection Agency (EPA), suspended International Business Machines (IBM) from receiving Federal Contracts, approved subcontracts, assistance, loans and other benefits.

  14. A MAP read-routine for IBM 7094 Fortran II binary tapes

    Treesearch

    Robert S. Helfman

    1966-01-01

    Two MAP (Macro Assembly Program) language routines are described. They permit Fortran IV programs to read binary tapes generated by Fortran II programs on the IBM 7090 and 7094 computers. One routine is for use with 7040/44-IBSYS, the other for 7090/94-IBSYS.

  15. An IBM 370 assembly language program verifier

    NASA Technical Reports Server (NTRS)

    Maurer, W. D.

    1977-01-01

    The paper describes a program written in SNOBOL which verifies the correctness of programs written in assembly language for the IBM 360 and 370 series of computers. The motivation for using assembly language as a source language for a program verifier was the realization that many errors in programs are caused by misunderstanding or ignorance of the characteristics of specific computers. The proof of correctness of a program written in assembly language must take these characteristics into account. The program has been compiled and is currently running at the Center for Academic and Administrative Computing of The George Washington University.

  16. Coding, testing and documentation of processors for the flight design system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The general functional design and implementation of processors for a space flight design system are briefly described. Discussions of a basetime initialization processor; conic, analytical, and precision coasting flight processors; and an orbit lifetime processor are included. The functions of several utility routines are also discussed.

  17. MBASIC batch processor architectural overview

    NASA Technical Reports Server (NTRS)

    Reynolds, S. M.

    1978-01-01

    The MBASIC (TM) batch processor, a language translator designed to operate in the MBASIC (TM) environment is described. Features include: (1) a CONVERT TO BATCH command, usable from the ready mode; and (2) translation of the users program in stages through several levels of intermediate language and optimization. The processor is to be designed and implemented in both machine-independent and machine-dependent sections. The architecture is planned so that optimization processes are transparent to the rest of the system and need not be included in the first design implementation cycle.

  18. Design of RISC Processor Using VHDL and Cadence

    NASA Astrophysics Data System (ADS)

    Moslehpour, Saeid; Puliroju, Chandrasekhar; Abu-Aisheh, Akram

    The project deals with the development of a basic RISC processor. The processor is designed with a basic architecture consisting of internal modules such as a clock generator, memory, program counter, instruction register, accumulator, arithmetic and logic unit, and decoder. This processor is mainly intended for simple general-purpose tasks such as arithmetic operations and can be further developed into a general-purpose processor by increasing the size of the instruction register. The processor is designed in VHDL using Xilinx 8.1i. The present project also serves as an application of the knowledge gained from past studies of the PSPICE program. The study will show how PSPICE can be used to simplify massive complex circuits designed in VHDL synthesis. The purpose of the project is to explore the designed RISC model piece by piece, examine and understand the Input/Output pins, and to show how the VHDL synthesis code can be converted to a simplified PSPICE model. The project will also serve as a collection of various research materials about the pieces of the circuit.

  19. Real time processor for array speckle interferometry

    NASA Astrophysics Data System (ADS)

    Chin, Gordon; Florez, Jose; Borelli, Renan; Fong, Wai; Miko, Joseph; Trujillo, Carlos

    1989-02-01

    The authors are constructing a real-time processor to acquire image frames, perform array flat-fielding, execute a 64 x 64 element two-dimensional complex FFT (fast Fourier transform) and average the power spectrum, all within the 25 ms coherence time for speckles at near-IR (infrared) wavelength. The processor will be a compact unit controlled by a PC with real-time display and data storage capability. This will provide the ability to optimize observations and obtain results on the telescope rather than waiting several weeks before the data can be analyzed and viewed with offline methods. The image acquisition and processing, design criteria, and processor architecture are described.

  20. Real time processor for array speckle interferometry

    NASA Technical Reports Server (NTRS)

    Chin, Gordon; Florez, Jose; Borelli, Renan; Fong, Wai; Miko, Joseph; Trujillo, Carlos

    1989-01-01

    The authors are constructing a real-time processor to acquire image frames, perform array flat-fielding, execute a 64 x 64 element two-dimensional complex FFT (fast Fourier transform) and average the power spectrum, all within the 25 ms coherence time for speckles at near-IR (infrared) wavelength. The processor will be a compact unit controlled by a PC with real-time display and data storage capability. This will provide the ability to optimize observations and obtain results on the telescope rather than waiting several weeks before the data can be analyzed and viewed with offline methods. The image acquisition and processing, design criteria, and processor architecture are described.
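
    The per-frame pipeline summarized above (array flat-fielding, a 64 x 64 two-dimensional complex FFT, and power-spectrum averaging) can be sketched in a few lines of numerical code. The sketch below is illustrative only: it uses synthetic frames and a unity flat field, and makes no attempt to model the authors' real-time hardware or the 25 ms timing budget.

      import numpy as np

      def accumulate_power_spectrum(frames, flat, dark):
          # Flat-field each 64x64 frame, FFT it, and average the power spectra,
          # mimicking the per-frame processing described for the speckle processor.
          acc = np.zeros(frames[0].shape, dtype=float)
          for frame in frames:
              corrected = (frame - dark) / flat      # array flat-fielding
              spectrum = np.fft.fft2(corrected)      # 64 x 64 two-dimensional complex FFT
              acc += np.abs(spectrum) ** 2           # power spectrum of this frame
          return acc / len(frames)                   # average over frames

      # Synthetic example: 100 Poisson-noise frames with a unity flat field.
      rng = np.random.default_rng(0)
      frames = rng.poisson(100.0, size=(100, 64, 64)).astype(float)
      mean_power = accumulate_power_spectrum(frames, np.ones((64, 64)), np.zeros((64, 64)))
      print(mean_power.shape)                        # (64, 64)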

  1. Noncoherent parallel optical processor for discrete two-dimensional linear transformations.

    PubMed

    Glaser, I

    1980-10-01

    We describe a parallel optical processor, based on a lenslet array, that provides general linear two-dimensional transformations using noncoherent light. Such a processor could become useful in image- and signal-processing applications in which the throughput requirements cannot be adequately satisfied by state-of-the-art digital processors. Experimental results that illustrate the feasibility of the processor by demonstrating its use in parallel optical computation of the two-dimensional Walsh-Hadamard transformation are presented.
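
    For reference, the two-dimensional Walsh-Hadamard transformation demonstrated on the optical processor is the separable linear transform sketched digitally below. The 8 x 8 test image and the Sylvester-ordered Hadamard matrix are assumptions chosen for illustration; the sketch says nothing about the lenslet-array optics themselves.

      import numpy as np

      def hadamard(n):
          # Sylvester-construction Hadamard matrix; n must be a power of two.
          H = np.array([[1.0]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      def walsh_hadamard_2d(image):
          # Separable 2-D Walsh-Hadamard transform: H @ X @ H^T.
          H = hadamard(image.shape[0])
          return H @ image @ H.T

      image = np.arange(64, dtype=float).reshape(8, 8)
      coeffs = walsh_hadamard_2d(image)
      # The transform is its own inverse up to the scale factor n*n (here 64).
      recovered = walsh_hadamard_2d(coeffs) / 64.0
      print(np.allclose(recovered, image))           # True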

  2. Advanced Multiple Processor Configuration Study. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    This summary of a study on multiple processor configurations includes the objectives, background, approach, and results of research undertaken to provide the Air Force with a generalized model of computer processor combinations for use in the evaluation of proposed flight training simulator computational designs. An analysis of a real-time flight…

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudman, D.L.

    For 17 years, the sensor-based IBM 1800 computer successfully fulfilled Sun's requirements for data acquisition and process control at its petroleum refinery in Toledo, Ohio. However, faltering reliability due to deterioration, coupled with IBM's announced withdrawal of contractual hardware maintenance, prompted Sun to approach IBM regarding potential solutions to the problem of economically maintaining the IBM 1800 as a viable system in the Toledo Refinery. In concert, IBM and Sun identified several options, but an IBM proposal which held the most promise for long term success was the direct replacement of the IBM 1800 processor and software systems with an IBM 4300 running IBM's licensed program product "Advanced Control System," i.e., ACS. Sun chose this solution. The intent of this paper is to chronicle the highlights of the project which successfully revitalized the process computer facilities in Sun's Toledo Refinery in only 10 months, under financial constraints, and using limited human resources.

  4. A Medical Language Processor for Two Indo-European Languages

    PubMed Central

    Nhan, Ngo Thanh; Sager, Naomi; Lyman, Margaret; Tick, Leo J.; Borst, François; Su, Yun

    1989-01-01

    The syntax and semantics of clinical narrative across Indo-European languages are quite similar, making it possible to envision a single medical language processor that can be adapted for different European languages. The Linguistic String Project of New York University is continuing the development of its Medical Language Processor in this direction. The paper describes how the processor operates on English and French.

  5. Software-Reconfigurable Processors for Spacecraft

    NASA Technical Reports Server (NTRS)

    Farrington, Allen; Gray, Andrew; Bell, Bryan; Stanton, Valerie; Chong, Yong; Peters, Kenneth; Lee, Clement; Srinivasan, Jeffrey

    2005-01-01

    A report presents an overview of an architecture for a software-reconfigurable network data processor for a spacecraft engaged in scientific exploration. When executed on suitable electronic hardware, the software performs the functions of a physical layer (in effect, acts as a software radio in that it performs modulation, demodulation, pulse-shaping, error correction, coding, and decoding), a data-link layer, a network layer, a transport layer, and application-layer processing of scientific data. The software-reconfigurable network processor is undergoing development to enable rapid prototyping and rapid implementation of communication, navigation, and scientific signal-processing functions; to provide a long-lived communication infrastructure; and to provide greatly improved scientific-instrumentation and scientific-data-processing functions by enabling science-driven in-flight reconfiguration of computing resources devoted to these functions. This development is an extension of terrestrial radio and network developments (e.g., in the cellular-telephone industry) implemented in software running on such hardware as field-programmable gate arrays, digital signal processors, traditional digital circuits, and mixed-signal application-specific integrated circuits (ASICs).

  6. Dynamic Load Balancing For Grid Partitioning on a SP-2 Multiprocessor: A Framework

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as the decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove after a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide whether the new distribution should be adopted, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove, running on a single IBM SP2 node, has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine IBM SP2.
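
    The accept/reject decision described above can be caricatured with a simple cost test: repartitioning is worthwhile only if the projected per-iteration saving over some horizon exceeds the one-time data-movement cost. The load numbers, cost model, and acceptance rule below are illustrative assumptions, not the criteria used by Jove.

      import numpy as np

      def imbalance(loads):
          # Relative load imbalance of a partition: max load over mean load.
          loads = np.asarray(loads, dtype=float)
          return loads.max() / loads.mean()

      def accept_repartition(current_loads, candidate_loads, migration_cost, horizon=100):
          # Accept the candidate partition only if the projected saving over the
          # next 'horizon' iterations outweighs the one-time data-movement cost.
          saving_per_iteration = np.max(current_loads) - np.max(candidate_loads)
          return saving_per_iteration * horizon > migration_cost

      current = np.array([120.0, 80.0, 100.0, 60.0])    # per-processor work, arbitrary units
      candidate = np.array([90.0, 90.0, 90.0, 90.0])    # evenly repartitioned work
      print(imbalance(current), accept_repartition(current, candidate, migration_cost=1500.0))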

  7. Integrated Building Management System (IBMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anita Lewis

    This project provides a combination of software and services that more easily and cost-effectively help to achieve optimized building performance and energy efficiency. Featuring an open-platform, cloud-hosted application suite and an intuitive user experience, this solution simplifies a traditionally very complex process by collecting data from disparate building systems and creating a single, integrated view of building and system performance. The Fault Detection and Diagnostics algorithms developed within the IBMS have been designed and tested as an integrated component of the control algorithms running the equipment being monitored. The algorithms identify the normal control behaviors of the equipment without interfering with the equipment control sequences. The algorithms also work without interfering with any cooperative control sequences operating between different pieces of equipment or building systems. In this manner the FDD algorithms create an integrated building management system.

  8. George E. Pake Prize Lecture: Physical Sciences Research at IBM: Still at the Cutting Edge

    NASA Astrophysics Data System (ADS)

    Theis, Thomas

    2015-03-01

    The information technology revolution is in its ``build out'' phase. The foundational scientific insights and hardware inventions are now many decades old. The microelectronics industry is maturing. An increasing fraction of the total research investment is in software and services, as applications of information technology transform every business and every sector of the public and private economy. Yet IBM Research continues to make substantial investments in hardware technology and the underlying physical sciences. While some of this investment is aimed at extending the established transistor technology, an increasing fraction is aimed at longer-term and possibly disruptive research - new devices for computing, such as tunneling field-effect transistors and nanophotonic circuits, and new architectures, such as neurosynaptic systems and quantum computing. This research investment is a bet that the old foundations of information technology are ripe for reinvention. After all, today's information technology devices and systems operate far from any fundamental limits on speed and energy efficiency. But how can IBM make risky long-term research investments in an era of global competition, with financial markets focused on the short term? One important answer is partnerships. Since its early days, IBM Research has pursued innovation in information technology and innovation in the ways it conducts the business of research. By continuously evolving new models for research and development partnerships, it has extended its global reach, increased its impact on IBM's customers, and expanded the breadth and depth of its research project portfolio. Research in the physical sciences has often led the way. Currently on assignment to the Semiconductor Research Corporation.

  9. Baseband processor development for the Advanced Communications Satellite Program

    NASA Technical Reports Server (NTRS)

    Moat, D.; Sabourin, D.; Stilwell, J.; Mccallister, R.; Borota, M.

    1982-01-01

    An onboard-baseband-processor concept for a satellite-switched time-division-multiple-access (SS-TDMA) communication system was developed for NASA Lewis Research Center. The baseband processor routes and controls traffic on an individual message basis while providing significant advantages in improved link margins and system flexibility. Key technology developments required to prove the flight readiness of the baseband-processor design are being verified in a baseband-processor proof-of-concept model. These technology developments include serial MSK modems, a Clos-type baseband routing switch, a single-chip CMOS maximum-likelihood convolutional decoder, and custom LSI implementation of high-speed, low-power ECL building blocks.

  10. COMCAN; COMCAN2A; system safety common cause analysis. [IBM360; CDC CYBER176,175; FORTRAN IV (30%) and BAL (70%) (IBM360), FORTRAN IV (97%) and COMPASS (3%) (CDC CYBER176)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, G.R.; Wilson, J.R.

    COMCAN2A and COMCAN are designed to analyze complex systems such as nuclear plants for common causes of failure. A common cause event, or common mode failure, is a secondary cause that could contribute to the failure of more than one component and violates the assumption of independence. Analysis of such events is an integral part of system reliability and safety analysis. A significant common cause event is a secondary cause common to all basic events in one or more minimal cut sets. Minimal cut sets containing events from components sharing a common location or a common link are called common cause candidates. Components share a common location if no barrier insulates any one of them from the secondary cause. A common link is a dependency among components which cannot be removed by a physical barrier (e.g., a common energy source or common maintenance instructions). IBM360; CDC CYBER176,175; FORTRAN IV (30%) and BAL (70%) (IBM360), FORTRAN IV (97%) and COMPASS (3%) (CDC CYBER176); OS/360 (IBM360) and NOS/BE 1.4 (CDC CYBER176), NOS 1.3 (CDC CYBER175); 140K bytes of memory for COMCAN and 242K (octal) words of memory for COMCAN2A.

  11. Air-Lubricated Thermal Processor For Dry Silver Film

    NASA Astrophysics Data System (ADS)

    Siryj, B. W.

    1980-09-01

    Since dry silver film is processed by heat, it may be viewed on a light table only seconds after exposure. On the other hand, wet films require both bulky chemicals and substantial time before an image can be analyzed. Processing of dry silver film, although simple in concept, is not so simple when reduced to practice. The main concern is the effect of film temperature gradients on uniformity of optical film density. RCA has developed two thermal processors, different in implementation but based on the same philosophy. Pressurized air is directed to both sides of the film to support the film and to conduct the heat to the film. Porous graphite is used as the medium through which heat and air are introduced. The initial thermal processor was designed to process 9.5-inch-wide film moving at speeds ranging from 0.0034 to 0.008 inch per second. The processor configuration was curved to match the plane generated by the laser recording beam. The second thermal processor was configured to process 5-inch-wide film moving at a continuously variable rate ranging from 0.15 to 3.5 inches per second. Due to field flattening optics used in this laser recorder, the required film processing area was plane. In addition, this processor was sectioned in the direction of film motion, giving the processor the capability of varying both temperature and effective processing area.

  12. Stability of lanthanum oxide-based H2S sorbents in realistic fuel processor/fuel cell operation

    NASA Astrophysics Data System (ADS)

    Valsamakis, Ioannis; Si, Rui; Flytzani-Stephanopoulos, Maria

    We report that lanthana-based sulfur sorbents are an excellent choice as once-through chemical filters for the removal of trace amounts of H2S and COS from any fuel gas at temperatures matching those of solid oxide fuel cells. We have examined sorbents based on lanthana and Pr-doped lanthana with up to 30 at.% praseodymium, having high desulfurization efficiency, as measured by their ability to remove H2S from simulated reformate gas streams to below 50 ppbv with a corresponding sulfur capacity exceeding 50 mg S per g of sorbent at 800 °C. Intermittent sorbent operation with air-rich boiler exhaust-type gas mixtures and with frequent shutdowns and restarts is possible without formation of lanthanide oxycarbonate phases. Upon restart, desulfurization continues from where it left off at the end of the previous cycle. These findings are important for practical applications of these sorbents as sulfur polishing units of fuel gases in the presence of small or large amounts of water vapor, and with the regular shutdown/start-up operation practiced in fuel processors/fuel cell systems, both stationary and mobile, and of any size/scale.

  13. Hazardous Waste Cleanup: IBM Corporation, Former in Hopewell Junction, New York

    EPA Pesticide Factsheets

    IBM's facility is located in Hopewell Junction, New York, bordered on the north by U.S. Route 52, to the east by County Highway 27, and to the south by U.S. Route 84. There is an unnamed creek next to the surrounding open fields to the west. The 592-acre

  14. Automatic film processors' quality control test in Greek military hospitals.

    PubMed

    Lymberis, C; Efstathopoulos, E P; Manetou, A; Poudridis, G

    1993-04-01

    The two major military radiology installations (Athens, Greece), using a total of 15 automatic film processors, were assessed using the 21-step-wedge method. The results of quality control in all these processors are presented. The parameters measured under actual working conditions were base and fog, contrast and speed. Base and fog as well as speed displayed large variations, with average values generally higher than acceptable, whilst contrast displayed greater stability. Developer temperature was measured daily during the test and was found to be outside the film manufacturers' recommended limits in nine of the 15 processors. In only one processor did film passing time vary on an everyday basis, and this was due to maloperation. The developer pH test was not part of the daily monitoring, being performed every 5 days for each film processor; pH was found to be in the range 9-12, and 10 of the 15 processors presented pH values outside the limits specified by the film manufacturers.
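
    As background on the 21-step-wedge method, the reported indices (base plus fog, speed, contrast) are conventionally read off the measured step densities roughly as sketched below. The particular step numbers and the synthetic density readings are assumptions for illustration, not the protocol used in this study.

      import numpy as np

      def sensitometric_indices(densities, speed_step=11, contrast_steps=(9, 13)):
          # Derive simple QC indices from 21 step-wedge optical densities
          # (listed from the lightest to the darkest step).
          densities = np.asarray(densities, dtype=float)
          base_plus_fog = densities[0]                       # density of the unexposed step
          speed = densities[speed_step - 1]                  # density of a chosen mid-density step
          lo, hi = contrast_steps
          contrast = densities[hi - 1] - densities[lo - 1]   # density difference between two steps
          return base_plus_fog, speed, contrast

      # Illustrative wedge readings for one processed sensitometric strip.
      readings = np.linspace(0.18, 3.2, 21)
      print(sensitometric_indices(readings))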

  15. Using the Word Processor in Writing Groups.

    ERIC Educational Resources Information Center

    Melia, Josie

    Writing groups can use word processors or microcomputers in many different types of writing activities. Four hour-long sessions at a word processor with the help of a skilled word processing tutor have been found to be sufficient to provide a working knowledge of word processing. When two or three students enrolled in a writing class are assigned…

  16. Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei

    2013-03-01

    With the development of international wireless communication standards, there is an increase in the computational requirements for baseband signal processors. Time-to-market pressure makes it impossible to completely redesign new processors for the evolving standards. Due to their high flexibility and low power, software defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA approaches. In addition, large numbers of parallel data are processed in computation-intensive functions, which fosters the development of single instruction multiple data (SIMD) architecture in SDR platforms. So a new way must be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of programmable SIMD SDR processors in a machine description language, LISA. LISA is a language for instruction set architecture description which enables rapid modeling at the architectural level. In order to evaluate the capability of our proposed processor, three common baseband functions, FFT, FIR digital filter and matrix multiplication, have been mapped on the SDR platform. Analytical results showed that the SDR processor achieved a maximum of 47.1% performance boost relative to a competing processor.

  17. Optical backplane interconnect switch for data processors and computers

    NASA Technical Reports Server (NTRS)

    Hendricks, Herbert D.; Benz, Harry F.; Hammer, Jacob M.

    1989-01-01

    An optoelectronic integrated device design is reported which can be used to implement an all-optical backplane interconnect switch. The switch is sized to accommodate an array of processors and memories suitable for direct replacement into the basic avionic multiprocessor backplane. The optical backplane interconnect switch is also suitable for direct replacement of the PI bus traffic switch and at the same time, suitable for supporting pipelining of the processor and memory. The 32 bidirectional switchable interconnects are configured with broadcast capability for controls, reconfiguration, and messages. The approach described here can handle a serial interconnection of data processors or a line-to-link interconnection of data processors. An optical fiber demonstration of this approach is presented.

  18. Conversion of LARSYS III.1 to an IBM 370 computer

    NASA Technical Reports Server (NTRS)

    Williams, G. N.; Leggett, J.; Hascall, G. A.

    1975-01-01

    A software system for processing multispectral aircraft or satellite data (LARSYS) was designed and written at the Laboratory for Applications of Remote Sensing at Purdue University. This system, being implemented on an IBM 360/67 computer utilizing the Cambridge Monitor System, is of an interactive nature. TAMU LARSYS maintains the essential capabilities of Purdue's LARSYS. The machine configuration for which it has been converted is an IBM-compatible Amdahl 470V/6 computer utilizing the time sharing option of the currently implemented OS/VS2 Operating System. Due to TSO limitations, the NASA-JSC deliverable TAMU LARSYS comprises two parts. Part one is a TSO Control Card Checker for LARSYS control cards, and part two is a batch version of LARSYS. Used together, they afford most of the capabilities of the original LARSYS III.1. Additionally, two programs have been written by TAMU to support LARSYS processing. The first is an ERTS-to-MIST conversion program used to convert ERTS data to the LARSYS input form, the MIST tape. The second is a system run table code which maintains tape/file location information for the MIST data sets.

  19. A Method for Transferring Photoelectric Photometry Data from Apple II+ to IBM PC

    NASA Astrophysics Data System (ADS)

    Powell, Harry D.; Miller, James R.; Stephenson, Kipp

    1989-06-01

    A method is presented for transferring photoelectric photometry data files from an Apple II computer to an IBM PC computer in a form which is compatible with the AAVSO Photoelectric Photometry data collection process.

  20. High Resolution Displays In The Apple Macintosh And IBM PC Environments

    NASA Astrophysics Data System (ADS)

    Winegarden, Steven

    1989-07-01

    High resolution displays are one of the key elements that distinguish user oriented document finishing or publishing stations. A number of factors have been involved in bringing these to the desktop environment. At Sigma Designs we have concentrated on enhancing the capabilities of IBM PCs and compatibles and Apple Macintosh computer systems.

  1. Methanol Fuel Cell

    NASA Technical Reports Server (NTRS)

    Voecks, G. E.

    1985-01-01

    In the proposed fuel-cell system, methanol is converted to hydrogen in two places. An external fuel processor converts only part of the methanol. The remaining methanol is converted in the fuel cell itself, in a reaction at the anode. As a result, the size of the fuel processor is reduced, system efficiency is increased, and cost is lowered.

  2. An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hudec, Ján; Gramatová, Elena

    2015-07-01

    The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to software-oriented testing. Quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. The rules, parameters and fitness functions were defined for various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.
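
    A genetic search of this kind, with instruction sequences as individuals and code coverage as the fitness, can be outlined as follows. The instruction set, genetic operators, and the stand-in coverage function below are assumptions made for illustration; in the paper the fitness is obtained from ModelSim coverage of the VHDL processor model.

      import random

      INSTRUCTIONS = ["ADD", "SUB", "AND", "OR", "LOAD", "STORE", "JMP", "NOP"]

      def random_program(length=16):
          return [random.choice(INSTRUCTIONS) for _ in range(length)]

      def coverage(program):
          # Stand-in fitness: fraction of distinct opcodes exercised. A real flow
          # would run the sequence on the HDL model and read coverage back.
          return len(set(program)) / len(INSTRUCTIONS)

      def crossover(a, b):
          cut = random.randrange(1, len(a))
          return a[:cut] + b[cut:]

      def mutate(program, rate=0.1):
          return [random.choice(INSTRUCTIONS) if random.random() < rate else op
                  for op in program]

      def evolve(pop_size=30, generations=40):
          population = [random_program() for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=coverage, reverse=True)
              parents = population[: pop_size // 2]
              children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                          for _ in range(pop_size - len(parents))]
              population = parents + children
          best = max(population, key=coverage)
          return best, coverage(best)

      program, fitness = evolve()
      print(fitness, program[:8])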

  3. Mission and data operations IBM 360 user's guide

    NASA Technical Reports Server (NTRS)

    Balakirsky, J.

    1973-01-01

    The M and DO computer systems are introduced and supplemented. The hardware and software status is discussed, along with standard processors and user libraries. Data management techniques are presented, as well as machine independence, debugging facilities, and overlay considerations.

  4. 98. View of IBM digital computer model 7090 magnet core ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    98. View of IBM digital computer model 7090 magnet core installation. ITT Artic Services, Inc., Official photograph BMEWS Site II, Clear, AK, by unknown photographer, 17 September 1965. BMEWS, clear as negative no. A-6606. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  5. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    NASA Astrophysics Data System (ADS)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  6. Flood replenishment: a new method of processor control.

    PubMed

    Frank, E D; Gray, J E; Wilken, D A

    1980-01-01

    In mechanized radiographic film processors that process medium to low volumes of film, roll films, and those that process single-emulsion films from nuclear medicine scans, computed tomography, and ultrasound, it is difficult to maintain the developer solution at a stable processing level. We describe our experience using flood replenishment, which is a method in which developer replenisher containing starter solution is introduced in the processor at timed intervals, independent of the number of films being processed. By this process, a stable level of developer activity is maintained in a processor used to develop a medium to low volume of single-emulsion film.

  7. Control structures for high speed processors

    NASA Technical Reports Server (NTRS)

    Maki, G. K.; Mankin, R.; Owsley, P. A.; Kim, G. M.

    1982-01-01

    A special processor was designed to function as a Reed-Solomon decoder with a throughput data rate in the MHz range. This data rate is significantly greater than is possible with conventional digital architectures. To achieve this rate, the processor design includes sequential, pipelined, distributed, and parallel processing. The processor was designed using a high-level register transfer language (RTL). The RTL can be used to describe how the different processes are implemented by the hardware. One problem of special interest was the development of dependent processes, which are analogous to software subroutines. For greater flexibility, the RTL control structure was implemented in ROM. The special purpose hardware required approximately 1000 SSI and MSI components. The data rate throughput is 2.5 megabits/second. This data rate is achieved through the use of pipelined and distributed processing. This data rate can be compared with 800 kilobits/second in a recently proposed very large scale integration design of a Reed-Solomon encoder.

  8. Low Cost Design of an Advanced Encryption Standard (AES) Processor Using a New Common-Subexpression-Elimination Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Chih; Hsiao, Shen-Fu

    In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method has significant area improvement compared with previous designs.
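
    The idea of extracting shared factors from bit-level XOR sum-of-product expressions can be illustrated with a toy pass such as the one below, which greedily replaces the most frequently shared pair of terms with an intermediate signal. The pairwise-greedy strategy and the example expressions are assumptions for illustration, not the CSE algorithm proposed in the paper.

      from collections import Counter
      from itertools import combinations

      def eliminate_common_pairs(outputs):
          # Greedy CSE over XOR expressions: each output bit is the XOR of a set of
          # input names; repeatedly replace the most frequently shared pair of terms
          # with a new intermediate signal so the corresponding gate is built once.
          outputs = {name: set(terms) for name, terms in outputs.items()}
          intermediates = {}
          while True:
              pair_counts = Counter()
              for terms in outputs.values():
                  for pair in combinations(sorted(terms), 2):
                      pair_counts[pair] += 1
              if not pair_counts or pair_counts.most_common(1)[0][1] < 2:
                  break
              pair, _ = pair_counts.most_common(1)[0]
              new_signal = "t%d" % len(intermediates)
              intermediates[new_signal] = set(pair)
              for terms in outputs.values():
                  if set(pair) <= terms:
                      terms -= set(pair)
                      terms.add(new_signal)
          return outputs, intermediates

      # Toy example: three output bits sharing the XOR of x1 and x2.
      exprs = {"y0": {"x0", "x1", "x2"}, "y1": {"x1", "x2", "x3"}, "y2": {"x1", "x2"}}
      print(eliminate_common_pairs(exprs))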

  9. Graphics Processor Units (GPUs)

    NASA Technical Reports Server (NTRS)

    Wyrwas, Edward J.

    2017-01-01

    This presentation will include information about Graphics Processor Unit (GPU) technology, NASA Electronic Parts and Packaging (NEPP) tasks, the test setup, test parameter considerations, lessons learned, collaborations, a roadmap, NEPP partners, results to date, and future plans.

  10. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  11. SAP- FORTRAN STATIC SOURCE CODE ANALYZER PROGRAM (IBM VERSION)

    NASA Technical Reports Server (NTRS)

    Manteufel, R.

    1994-01-01

    The FORTRAN Static Source Code Analyzer program, SAP, was developed to automatically gather statistics on the occurrences of statements and structures within a FORTRAN program and to provide for the reporting of those statistics. Provisions have been made for weighting each statistic and to provide an overall figure of complexity. Statistics, as well as figures of complexity, are gathered on a module by module basis. Overall summed statistics are also accumulated for the complete input source file. SAP accepts as input syntactically correct FORTRAN source code written in the FORTRAN 77 standard language. In addition, code written using features in the following languages is also accepted: VAX-11 FORTRAN, IBM S/360 FORTRAN IV Level H Extended; and Structured FORTRAN. The SAP program utilizes two external files in its analysis procedure. A keyword file allows flexibility in classifying statements and in marking a statement as either executable or non-executable. A statistical weight file allows the user to assign weights to all output statistics, thus allowing the user flexibility in defining the figure of complexity. The SAP program is written in FORTRAN IV for batch execution and has been implemented on a DEC VAX series computer under VMS and on an IBM 370 series computer under MVS. The SAP program was developed in 1978 and last updated in 1985.
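
    A per-module statement census with user-supplied weights feeding a figure of complexity, as described above, can be approximated with a short script. The keyword classification and weights below are illustrative assumptions standing in for SAP's external keyword and weight files, and the classifier is deliberately crude.

      from collections import Counter

      # Illustrative classification and weights (SAP reads these from external files).
      KEYWORDS = {"IF", "DO", "CALL", "GOTO", "DIMENSION", "COMMON", "FORMAT"}
      WEIGHTS = {"IF": 2.0, "DO": 2.0, "GOTO": 3.0, "CALL": 1.0, "ASSIGNMENT": 1.0,
                 "DIMENSION": 0.0, "COMMON": 0.0, "FORMAT": 0.0}

      def classify(line):
          # Crude fixed-form FORTRAN classifier: comments carry C, * or ! in column 1;
          # anything whose first keyword is unrecognized is counted as an assignment.
          if not line or line[0].upper() in ("C", "*", "!"):
              return None
          stripped = line.strip().upper()
          if not stripped:
              return None
          first = stripped.split()[0].split("(")[0]
          return first if first in KEYWORDS else "ASSIGNMENT"

      def analyze(source_lines):
          counts = Counter(kind for kind in map(classify, source_lines) if kind)
          complexity = sum(WEIGHTS.get(kind, 1.0) * n for kind, n in counts.items())
          return counts, complexity

      module = [
          "      DIMENSION A(10)",
          "      DO 10 I = 1, 10",
          "      A(I) = I * I",
          "   10 CONTINUE",
          "      IF (A(5) .GT. 0) CALL REPORT(A)",
          "C     end of module",
      ]
      print(analyze(module))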

  12. SPROC: A multiple-processor DSP IC

    NASA Technical Reports Server (NTRS)

    Davis, R.

    1991-01-01

    A large, single-chip, multiple-processor, digital signal processing (DSP) integrated circuit (IC) fabricated in HP-Cmos34 is presented. The innovative architecture is best suited for analog and real-time systems characterized by both parallel signal data flows and concurrent logic processing. The IC is supported by a powerful development system that transforms graphical signal flow graphs into production-ready systems in minutes. Automatic compiler partitioning of tasks among four on-chip processors gives the IC the signal processing power of several conventional DSP chips.

  13. Fault tolerant, radiation hard, high performance digital signal processor

    NASA Technical Reports Server (NTRS)

    Holmann, Edgar; Linscott, Ivan R.; Maurer, Michael J.; Tyler, G. L.; Libby, Vibeke

    1990-01-01

    An architecture has been developed for a high-performance VLSI digital signal processor that is highly reliable, fault-tolerant, and radiation-hard. The signal processor, part of a spacecraft receiver designed to support uplink radio science experiments at the outer planets, organizes the connections between redundant arithmetic resources, register files, and memory through a shuffle exchange communication network. The configuration of the network and the state of the processor resources are all under microprogram control, which both maps the resources according to algorithmic needs and reconfigures the processing should a failure occur. In addition, the microprogram is reloadable through the uplink to accommodate changes in the science objectives throughout the course of the mission. The processor will be implemented with silicon compiler tools, and its design will be verified through silicon compilation simulation at all levels from the resources to full functionality. By blending reconfiguration with redundancy the processor implementation is fault-tolerant and reliable, and possesses the long expected lifetime needed for a spacecraft mission to the outer planets.

  14. Transfer of numeric ASCII data files between Apple and IBM personal computers.

    PubMed

    Allan, R W; Bermejo, R; Houben, D

    1986-01-01

    Listings for programs designed to transfer numeric ASCII data files between Apple and IBM personal computers are provided with accompanying descriptions of how the software operates. Details of the hardware used are also given. The programs may be easily adapted for transferring data between other microcomputers.

  15. UFO: A THREE-DIMENSIONAL NEUTRON DIFFUSION CODE FOR THE IBM 704

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auerbach, E.H.; Jewett, J.P.; Ketchum, M.A.

    A description of UFO, a code for the solution of the few-group neutron diffusion equation in three-dimensional Cartesian coordinates on the IBM 704, is given. An accelerated Liebmann flux iteration scheme is used, and optimum parameters can be calculated by the code whenever they are required. The theory and operation of the program are discussed. (auth)

  16. Processing techniques for software based SAR processors

    NASA Technical Reports Server (NTRS)

    Leung, K.; Wu, C.

    1983-01-01

    Software SAR processing techniques defined to treat Shuttle Imaging Radar-B (SIR-B) data are reviewed. The algorithms are devised for data processing procedure selection, SAR correlation function implementation, multiple-array-processor utilization, corner turning, variable reference length azimuth processing, and range migration handling. The Interim Digital Processor (IDP), originally implemented for handling Seasat SAR data, has been adapted for the SIR-B, and offers a resolution of 100 km using a processing procedure based on the fast Fourier transform fast-correlation approach. Peculiarities of the Seasat SAR data processing requirements are reviewed, along with modifications introduced for the SIR-B. An Advanced Digital SAR Processor (ADSP) is under development for use with the SIR-B in the 1986 time frame as an upgrade for the IDP, which will be in service in 1984-5.

  17. The emerging conceptualization of groups as information processors.

    PubMed

    Hinsz, V B; Tindale, R S; Vollrath, D A

    1997-01-01

    A selective review of research highlights the emerging view of groups as information processors. In this review, the authors include research on processing objectives, attention, encoding, storage, retrieval, processing, response, feedback, and learning in small interacting task groups. The groups as information processors perspective underscores several characteristic dimensions of variability in group performance of cognitive tasks, namely, commonality-uniqueness of information, convergence-diversity of ideas, accentuation-attenuation of cognitive processes, and belongingness-distinctiveness of members. A combination of contributions framework provides an additional conceptualization of information processing in groups. The authors also address implications, caveats, and questions for future research and theory regarding groups as information processors.

  18. Multitask neurovision processor with extensive feedback and feedforward connections

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1991-11-01

    A multi-task neuro-vision processor which performs a variety of information processing operations associated with the early stages of biological vision is presented. The network architecture of this neuro-vision processor, called the positive-negative (PN) neural processor, is loosely based on the neural activity fields exhibited by thalamic and cortical nervous tissue layers. The computational operation performed by the processor arises from the strength of the recurrent feedback among the numerous positive and negative neural computing units. By adjusting the feedback connections it is possible to generate diverse dynamic behavior that may be used for short-term visual memory (STVM), spatio-temporal filtering (STF), and pulse frequency modulation (PFM). The information attributes that are to be processed may be regulated by modifying the feedforward connections from the signal space to the neural processor.

  19. A unified approach to VLSI layout automation and algorithm mapping on processor arrays

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Pattabiraman, S.; Srinivasan, Vinoo N.

    1993-01-01

    Development of software tools for designing supercomputing systems is highly complex and cost-ineffective. To tackle this, a special-purpose PAcube silicon compiler which integrates different design levels from cells to processor arrays has been proposed. As a part of this, we present in this paper a novel methodology which unifies the problems of Layout Automation and Algorithm Mapping.

  20. Software-defined reconfigurable microwave photonics processor.

    PubMed

    Pérez, Daniel; Gasulla, Ivana; Capmany, José

    2015-06-01

    We propose, for the first time to our knowledge, a software-defined reconfigurable microwave photonics signal processor architecture that can be integrated on a chip and is capable of performing all the main functionalities by suitable programming of its control signals. The basic configuration is presented and a thorough end-to-end design model derived that accounts for the performance of the overall processor taking into consideration the impact and interdependencies of both its photonic and RF parts. We demonstrate the model versatility by applying it to several relevant application examples.

  1. Ssip-a processor interconnection simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Navaux, P.; Weber, R.; Prezzi, J.

    1982-01-01

    Recent growing interest in multiple processor architectures has given rise to the study of processor-memory interconnections for the determination of better architectures. This paper concerns the development of the SSIP-sistema simulador de interconexao de processadores (processor interconnection simulating system), which allows the evaluation of different interconnection structures, comparing their performance in order to provide parameters which would help the designer to define an architecture. A wide spectrum of systems may be evaluated, and their behaviour observed, due to the features incorporated into the simulator program. The system modelling and the simulator program implementation are described. Some results that can be obtained are shown, along with a discussion of their usefulness. 12 references.

  2. DFT algorithms for bit-serial GaAs array processor architectures

    NASA Technical Reports Server (NTRS)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.

  3. A high-accuracy optical linear algebra processor for finite element applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
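
    Multiplication by digital convolution, mentioned above, treats each operand as a vector of digits, convolves the two digit vectors, and then resolves the carries in the mixed-radix result. The minimal decimal-digit sketch below illustrates the encoding idea only, not the optical architecture.

      import numpy as np

      def digits(n):
          # Least-significant-first decimal digits of a non-negative integer.
          return [int(d) for d in str(n)][::-1]

      def multiply_by_convolution(a, b):
          # Convolve the digit vectors to get per-weight digit-product sums,
          # then propagate carries over the mixed-radix result.
          conv = np.convolve(digits(a), digits(b))
          result, carry = 0, 0
          for power, value in enumerate(conv):
              total = int(value) + carry
              result += (total % 10) * 10 ** power
              carry = total // 10
          return result + carry * 10 ** len(conv)

      print(multiply_by_convolution(1234, 5678), 1234 * 5678)   # both print 7006652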

  4. Implementing wavelet inverse-transform processor with surface acoustic wave device.

    PubMed

    Lu, Wenke; Zhu, Changchun; Liu, Qinghong; Zhang, Jingduan

    2013-02-01

    The objective of this research was to investigate the implementation schemes of the wavelet inverse-transform processor using surface acoustic wave (SAW) devices, the length function defining the electrodes, and the possibility of solving the load resistance and the internal resistance for the wavelet inverse-transform processor using a SAW device. In this paper, we investigate the implementation schemes of the wavelet inverse-transform processor using a SAW device. In the implementation scheme in which the input interdigital transducer (IDT) and output IDT stand in a line, because the electrode-overlap envelope of the input IDT is identical with that of the output IDT (i.e. the two transducers are identical), the product of the input IDT's frequency response and the output IDT's frequency response can be implemented, so that the wavelet inverse-transform processor can be fabricated. X-112°Y LiTaO3 is used as the substrate material to fabricate the wavelet inverse-transform processor. The size of the wavelet inverse-transform processor using this implementation scheme is small, so its cost is low. First, according to the envelope function of the wavelet function, the length function of the electrodes is defined; then, the lengths of the electrodes can be calculated from the length function of the electrodes; finally, the input IDT and output IDT can be designed according to the lengths and widths of the electrodes. In this paper, we also present the load resistance and the internal resistance as two problems of the wavelet inverse-transform processor using SAW devices. The solutions to these problems are achieved in this study. When amplifiers are connected to the input end and output end of the wavelet inverse-transform processor, they can eliminate the influence of the load resistance and the internal resistance on the output voltage of the wavelet inverse-transform processor using a SAW device. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Parallel network simulations with NEURON.

    PubMed

    Migliore, M; Cannia, C; Lytton, W W; Markram, Henry; Hines, M L

    2006-10-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.

  6. Parallel Network Simulations with NEURON

    PubMed Central

    Migliore, M.; Cannia, C.; Lytton, W.W; Markram, Henry; Hines, M. L.

    2009-01-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored. PMID:16732488
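
    The integration scheme described here, in which each processor advances its subnet for an interval equal to the minimum interprocessor connection delay and then exchanges spikes, can be mimicked with a toy event loop. The sketch below uses sequentially simulated 'processors' and toy integrate-and-fire cells purely to illustrate the exchange interval; it is not NEURON's implementation.

      import numpy as np

      MIN_DELAY = 2.0   # ms: minimum interprocessor connection delay
      DT = 0.1          # ms: integration time step
      T_STOP = 50.0     # ms: total simulated time

      class Subnet:
          # Toy leaky integrate-and-fire subnet owned by one simulated processor.
          def __init__(self, n, rng):
              self.v = np.zeros(n)
              self.drive = rng.uniform(0.8, 1.2, n)
              self.outbox = []                          # spikes produced this interval

          def advance(self, interval, incoming):
              self.outbox = []
              self.v += 0.05 * len(incoming)            # apply spikes delivered at the boundary
              for _ in range(int(round(interval / DT))):
                  self.v += DT * (self.drive - self.v)
                  fired = np.nonzero(self.v > 1.0)[0]
                  self.v[fired] = 0.0
                  self.outbox.extend(fired.tolist())

      rng = np.random.default_rng(1)
      subnets = [Subnet(10, rng) for _ in range(4)]     # four simulated processors
      mailboxes = [[] for _ in subnets]
      t = 0.0
      while t < T_STOP:
          # Each subnet integrates MIN_DELAY ms independently; spikes are then exchanged,
          # which is safe because no spike can take effect sooner than MIN_DELAY.
          for rank, net in enumerate(subnets):
              net.advance(MIN_DELAY, mailboxes[rank])
          mailboxes = [[s for other in subnets if other is not net for s in other.outbox]
                       for net in subnets]
          t += MIN_DELAY
      print([len(m) for m in mailboxes])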

  7. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    DOEpatents

    Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  8. IBM Academic Information Systems University AEP Conference: "Tools for Learning" (2nd, San Diego, California, April 5-8, 1986). Agenda [and Abstracts].

    ERIC Educational Resources Information Center

    International Business Machines Corp., Milford, CT. Academic Information Systems.

    This agenda lists activities scheduled for the second IBM (International Business Machines) Academic Information Systems University AEP (Advanced Education Projects) Conference, which was designed to afford the universities participating in the IBM-sponsored AEPs an opportunity to demonstrate their AEP experiments in educational computing. In…

  9. Satellite on-board real-time SAR processor prototype

    NASA Astrophysics Data System (ADS)

    Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François

    2017-11-01

    A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates this is a clear disadvantage. SAR images are typically processed electronically applying dedicated Fourier transformations. This, however, can also be performed optically in real-time. Originally the first SAR images were optically processed. The optical Fourier processor architecture provides inherent parallel computing capabilities allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel yielding real-time performances, i.e. without resulting bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and
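
    The Fourier-domain compression mentioned above amounts, for each range line, to matched filtering against the transmitted chirp (and analogously against an azimuth reference). The digital range-compression sketch below uses an arbitrary synthetic chirp and two point targets; the chirp parameters are assumptions, and the optronic implementation is not modeled.

      import numpy as np

      # Assumed chirp parameters for a synthetic example (not taken from the paper).
      FS = 100e6          # sampling rate, Hz
      PULSE = 10e-6       # pulse length, s
      BANDWIDTH = 30e6    # chirp bandwidth, Hz

      t = np.arange(0, PULSE, 1 / FS)
      chirp = np.exp(1j * np.pi * (BANDWIDTH / PULSE) * t ** 2)   # transmitted reference chirp

      # Raw range line: two point targets, i.e. two delayed copies of the chirp plus noise.
      rng = np.random.default_rng(0)
      line = np.zeros(4096, dtype=complex)
      for delay in (500, 1500):
          line[delay:delay + chirp.size] += chirp
      line += 0.1 * (rng.standard_normal(line.size) + 1j * rng.standard_normal(line.size))

      # Range compression: multiply the spectra by the conjugate chirp spectrum (matched filter).
      compressed = np.fft.ifft(np.fft.fft(line) * np.conj(np.fft.fft(chirp, line.size)))
      print(np.sort(np.argsort(np.abs(compressed))[-2:]))         # peaks near samples 500 and 1500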

  10. Radio astronomy Explorer B antenna aspect processor

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Novello, J.; Reeves, C. C.

    1972-01-01

    The antenna aspect system used on the Radio Astronomy Explorer B spacecraft is described. This system consists of two facsimile cameras, a data encoder, and a data processor. Emphasis is placed on the discussion of the data processor, which contains a data compressor and a source encoder. With this compression scheme a compression ratio of 8 is achieved on a typical line of camera data. These compressed data are then convolutionally encoded.

  11. 97. View of International Business Machine (IBM) digital computer model ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    97. View of International Business Machine (IBM) digital computer model 7090 magnetic core installation, international telephone and telegraph (ITT) Artic Services Inc., Official photograph BMEWS site II, Clear, AK, by unknown photographer, 17 September 1965, BMEWS, clear as negative no. A-6604. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  12. Performance statistics of the FORTRAN 4 /H/ library for the IBM system/360

    NASA Technical Reports Server (NTRS)

    Clark, N. A.; Cody, W. J., Jr.; Hillstrom, K. E.; Thieleker, E. A.

    1969-01-01

    Test procedures and results for accuracy and timing tests of the basic IBM 360/50 FORTRAN 4 /H/ subroutine library are reported. The testing was undertaken to verify performance capability and as a prelude to providing some replacement routines of improved performance.

  13. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
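
    As a hedged illustration of the approximation discussed above (the notation is assumed here, not taken from the report), for independent module completion times X_1, ..., X_n assigned to one processor the analysis replaces the expected maximum by the maximum expectation:

        E\!\left[\max_{1\le i\le n} X_i\right] \;\approx\; \max_{1\le i\le n} E[X_i]

    The left-hand side is never smaller than the right-hand side, which is why the approximation-free two-processor proof mentioned above strengthens the result.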

  14. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  15. CoNNeCT Baseband Processor Module

    NASA Technical Reports Server (NTRS)

    Yamamoto, Clifford K; Jedrey, Thomas C.; Gutrich, Daniel G.; Goodpasture, Richard L.

    2011-01-01

    A document describes the CoNNeCT Baseband Processor Module (BPM) based on an updated processor, memory technology, and field-programmable gate arrays (FPGAs). The BPM was developed from a requirement to provide sufficient computing power and memory storage to conduct experiments for a Software Defined Radio (SDR) to be implemented. The flight SDR uses the AT697 SPARC processor with on-chip data and instruction cache. The non-volatile memory has been increased from a 20-Mbit EEPROM (electrically erasable programmable read only memory) to a 4-Gbit Flash, managed by the RTAX2000 Housekeeper, allowing more programs and FPGA bit-files to be stored. The volatile memory has been increased from a 20-Mbit SRAM (static random access memory) to a 1.25-Gbit SDRAM (synchronous dynamic random access memory), providing additional memory space for more complex operating systems and programs to be executed on the SPARC. All memory is EDAC (error detection and correction) protected, while the SPARC processor implements fault protection via TMR (triple modular redundancy) architecture. Further capability over prior BPM designs includes the addition of a second FPGA to implement features beyond the resources of a single FPGA. Both FPGAs are implemented with Xilinx Virtex-II and are interconnected by a 96-bit bus to facilitate data exchange. Dedicated 1.25-Gbit SDRAMs are wired to each Xilinx FPGA to accommodate high-rate data buffering for SDR applications as well as independent SpaceWire interfaces. The RTAX2000 manages scrubbing and configuration of each Xilinx FPGA.
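
    The TMR fault protection mentioned for the SPARC processor reduces, at its core, to majority voting over three redundant copies of each value. A minimal sketch of that voting step in Python (illustrative only, not the flight implementation):

        def tmr_vote(a: int, b: int, c: int) -> int:
            # Bitwise majority of three redundant copies: each output bit is 1
            # exactly when at least two of the three copies have that bit set.
            return (a & b) | (a & c) | (b & c)

        print(hex(tmr_vote(0xFF, 0xFF, 0x0F)))  # 0xff: the single corrupted copy is outvoted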

  16. Concept of a programmable maintenance processor applicable to multiprocessing systems

    NASA Technical Reports Server (NTRS)

    Glover, Richard D.

    1988-01-01

    A programmable maintenance processor concept applicable to multiprocessing systems has been developed at the NASA Ames Research Center's Dryden Flight Research Facility. This stand-alone processor is intended to provide support for system and application software testing as well as hardware diagnostics. An initial mechanization has been incorporated into the extended aircraft interrogation and display system (XAIDS), which is multiprocessing, general-purpose ground support equipment. The XAIDS maintenance processor has independent terminal and printer interfaces and a dedicated magnetic bubble memory that stores system test sequences entered from the terminal. This report describes the hardware and software embodied in this processor and shows a typical application in the check-out of a new XAIDS.

  17. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Optoelectronic processors with scanning CCD photodetectors

    NASA Astrophysics Data System (ADS)

    Esepkina, N. A.; Lavrov, A. P.; Anan'ev, M. N.; Blagodarnyi, V. S.; Ivanov, S. I.; Mansyrev, M. I.; Molodyakov, S. A.

    1995-10-01

    Two new types of optoelectronic radio-signal processors were investigated. Charge-coupled device (CCD) photodetectors are used in these processors under continuous scanning conditions, i.e., in a time delay and storage mode. One of these processors is based on a CCD photodetector array with a reference-signal amplitude transparency and the other is an adaptive acousto-optical signal processor with linear frequency modulation. The processor with the transparency performs multichannel discrete-analogue convolution of an input signal with a corresponding kernel of the transformation determined by the transparency. If the light source is an array of light-emitting diodes of special (stripe) geometry, the optical stages of the processor can be made from optical fibre components and the whole processor then becomes a rigid 'sandwich' (a compact hybrid optoelectronic microcircuit). A study is also reported of a prototype processor with optical fibre components for the reception of signals from a system with antenna aperture synthesis, which forms a radio image of the Earth.

  18. Reconfigurable signal processor designs for advanced digital array radar systems

    NASA Astrophysics Data System (ADS)

    Suarez, Hernan; Zhang, Yan (Rockee); Yu, Xining

    2017-05-01

    The new challenges originating from Digital Array Radar (DAR) demand a new generation of reconfigurable backend processors in the system. The new FPGA devices can support much higher speed, more bandwidth, and more processing capability to meet the needs of the digital Line Replaceable Unit (LRU). This study focuses on using the latest Altera and Xilinx devices in an adaptive beamforming processor. The field-reprogrammable RF devices from Analog Devices are used as analog front-end transceivers. Different from other existing Software-Defined Radio transceivers on the market, this processor is designed for distributed adaptive beamforming in a networked environment. The following aspects of the novel radar processor will be presented: (1) A new system-on-chip architecture based on Altera's devices and an adaptive processing module, especially for adaptive beamforming and pulse compression, will be introduced. (2) Successful implementation of Generation 2 serial RapidIO data links on the FPGA, which support the VITA-49 radio packet format for large distributed DAR processing. (3) Demonstration of the feasibility and capabilities of the processor in a Micro-TCA-based, SRIO switching backplane to support multichannel beamforming in real time. (4) Application of this processor in ongoing radar system development projects, including OU's dual-polarized digital array radar, the planned new cylindrical array radars, and future airborne radars.

  19. Continued Development of Internet Protocols under the IBM OS/MVS Operating System

    DTIC Science & Technology

    1985-01-25

    developed a prototype TCP/IP implementation for an IBM MVS host under a previous DARPA contract as part of the Internet research effort on the design of...participated in the DARPA Internet research program which led to the present TCP and IP protocols. Development of a prototype implementation of TCP/IP

  20. The study of structure in 224-234 thorium nuclei within the framework IBM

    NASA Astrophysics Data System (ADS)

    Lee, Su Youn; Lee, Young Jun; Lee, J. H.

    2017-09-01

    An investigation has been made of the behaviour of nuclear structure as a function of an increase in neutron number from 224Th to 234Th. Thorium of mass number 234 is a typical rotor nucleus that can be explained by the SU(3) limit of the interacting boson model (IBM) in the algebraic nuclear model. Furthermore, 224-232Th lie on the path of the symmetry-breaking phase transition. Moreover, the nuclear structure of 224Th can be explained using X(5) symmetry. However, as 226-230Th nuclei are not fully symmetrical nuclei, they can be represented by adding a perturbed term to express symmetry breaking. Through the following three calculation steps, we identified the tendency of change in nuclear structure. Firstly, the structure of 232Th is described using the matrix elements of the Hamiltonian and the electric quadrupole operator between basis states of the SU(3) limit in IBM. Secondly, the low-lying energy levels and E2 transition ratios corresponding to the observable physical values are calculated by adding a perturbed term with the first-order Casimir operator of the U(5) limit to the SU(3) Hamiltonian in IBM. We compared the results with experimental data for 224-234Th. Lastly, the potential of the Bohr Hamiltonian is represented by a harmonic oscillator, as a result of which the structure of 224-234Th could be expressed in closed form by an approximate separation of variables. The results of these theoretical predictions clarify how the nuclear structure of thorium changes over this range of mass numbers.
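
    A schematic form of the Hamiltonian described above, written in common IBM notation (the operator choices are an assumption consistent with the abstract, and the coefficients are illustrative, not values from the paper): the SU(3)-limit Hamiltonian is perturbed by the first-order Casimir operator of U(5), which is the d-boson number operator,

        H \;=\; -\,\kappa\, \hat{Q}^{(\chi)}\!\cdot\hat{Q}^{(\chi)} \;-\; \kappa'\, \hat{L}\cdot\hat{L} \;+\; \epsilon\, \hat{n}_d ,
        \qquad \chi = -\tfrac{\sqrt{7}}{2},

    so that \epsilon = 0 recovers the pure SU(3) rotor limit appropriate for 234Th, while a nonzero \epsilon breaks the symmetry toward U(5), as used above to describe the lighter isotopes.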

  1. Bill Lang's contributions to acoustics at International Business Machines Corporation (IBM), signal processing, international standards, and professionalism in noise control engineering

    NASA Astrophysics Data System (ADS)

    Maling, George C.

    2005-09-01

    Bill Lang joined IBM in the late 1950s with a mandate from Thomas Watson Jr. himself to establish an acoustics program at IBM. Bill created the facilities in Poughkeepsie, developed the local program, and was the leader in having other IBM locations with development and manufacturing responsibilities construct facilities and hire staff under the Interdivisional Liaison Program. He also directed IBM's acoustics technology program. In the mid-1960s, he led an IEEE standards group in Audio and Electroacoustics, and, with the help of James Cooley, Peter Welch, and others, introduced the fast Fourier transform to the acoustics community. He was the convenor of ISO TC 43 SC1 WG6 that began writing the 3740 series of standards in the 1970s. It was his suggestion to promote professionalism in noise control engineering, and, through meetings with Leo Beranek and others, he led the founding of INCE/USA in 1971. He was also a leader of the team that founded International INCE in 1974, and he served as president from 1988 until 1999.

  2. Implementation and Assessment of Advanced Analog Vector-Matrix Processor

    NASA Technical Reports Server (NTRS)

    Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acousto-optical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low cost, highly energy efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The sufficiency of the measured accuracy of the
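
    A toy numerical model of the error sources enumerated above can make the accuracy discussion concrete. The Python sketch below (all noise magnitudes are invented for illustration, not the measured values from the paper) perturbs an ideal vector-matrix product with source fluctuation, cross-talk, and detector shot noise, then reports the relative error against the exact result:

        import numpy as np

        rng = np.random.default_rng(0)

        def analog_vmm(M, v, src_fluct=0.01, crosstalk=0.005, shot=0.01):
            # Toy model of an analog optical vector-matrix multiply y = M @ v.
            v_noisy = v * (1 + src_fluct * rng.standard_normal(v.shape))        # source amplitude fluctuation
            M_noisy = M + crosstalk * M.max() * rng.standard_normal(M.shape)    # cross-talk between channels
            y = M_noisy @ v_noisy
            return y + shot * np.sqrt(np.abs(y)) * rng.standard_normal(y.shape)  # detector shot noise

        M, v = rng.random((8, 8)), rng.random(8)
        exact = M @ v
        print(np.linalg.norm(analog_vmm(M, v) - exact) / np.linalg.norm(exact))  # relative analog error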

  3. Making IBM's Computer, Watson, Human

    PubMed Central

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, Jeopardy, to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered and rejected. The viewpoint of the essay is that of teleological behaviorism. Mental states are defined as temporally extended patterns of overt behavior. From this viewpoint (although Watson does not currently have them), essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer. Most crucially, a computer may possess self-control and may act altruistically. However, the computer's appearance, its ability to make specific movements, its possession of particular internal structures (e.g., whether those structures are organic or inorganic), and the presence of any nonmaterial “self,” are all incidental to its humanity. PMID:22942530

  4. Making IBM's Computer, Watson, Human.

    PubMed

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, Jeopardy, to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered and rejected. The viewpoint of the essay is that of teleological behaviorism. Mental states are defined as temporally extended patterns of overt behavior. From this viewpoint (although Watson does not currently have them), essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer. Most crucially, a computer may possess self-control and may act altruistically. However, the computer's appearance, its ability to make specific movements, its possession of particular internal structures (e.g., whether those structures are organic or inorganic), and the presence of any nonmaterial "self," are all incidental to its humanity.

  5. Optical systolic array processor using residue arithmetic

    NASA Technical Reports Server (NTRS)

    Jackson, J.; Casasent, D.

    1983-01-01

    The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.
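
    The residue idea above is easy to state in a few lines: pick pairwise-coprime moduli, carry out the matrix-vector arithmetic independently in each residue channel, and recover the full-range result with the Chinese remainder theorem. A minimal Python sketch (the moduli are chosen only for illustration, not taken from the paper):

        import numpy as np
        from math import prod

        MODULI = (5, 7, 9, 11)  # pairwise coprime; product must exceed the largest result

        def crt(residues):
            # Chinese remainder reconstruction of the full-range integer result.
            M = prod(MODULI)
            x = 0
            for r, m in zip(residues, MODULI):
                Mi = M // m
                x += int(r) * Mi * pow(Mi, -1, m)
            return x % M

        A = np.array([[1, 2], [3, 4]])
        v = np.array([5, 6])
        channels = [((A % m) @ (v % m)) % m for m in MODULI]            # each residue channel is independent
        y = [crt([chan[i] for chan in channels]) for i in range(len(v))]
        print(y, (A @ v).tolist())                                      # [17, 39] [17, 39]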

  6. Effect of poor control of film processors on mammographic image quality.

    PubMed

    Kimme-Smith, C; Sun, H; Bassett, L W; Gold, R H

    1992-11-01

    With the increasingly stringent standards of image quality in mammography, film processor quality control is especially important. Current methods are not sufficient for ensuring good processing. The authors used a sensitometer and densitometer system to evaluate the performance of 22 processors at 16 mammographic facilities. Standard sensitometric values of two films were established, and processor performance was assessed for variations from these standards. Developer chemistry of each processor was analyzed and correlated with its sensitometric values. Ten processors were retested, and nine were found to be out of calibration. The developer components of hydroquinone, sulfites, bromide, and alkalinity varied the most, and low concentrations of hydroquinone were associated with lower average gradients at two facilities. Use of the sensitometer and densitometer system helps identify out-of-calibration processors, but further study is needed to correlate sensitometric values with developer component values. The authors believe that present quality control would be improved if sensitometric or other tests could be used to identify developer components that are out of calibration.

  7. Benchmarking NWP Kernels on Multi- and Many-core Processors

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  8. Processors for wavelet analysis and synthesis: NIFS and TI-C80 MVP

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey W.

    1996-03-01

    Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high speed spatio-temporal Gaussian filtering, which could be used for the QMF low-pass function, and difference of Gaussian filtering, which could be used for the QMF high-pass function. Although not designed specifically for wavelet analysis, the biologically-inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
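
    The QMF split described above pairs a low-pass filter with its quadrature mirror high-pass filter, decimates both outputs, and reconstructs by upsampling and re-filtering. A minimal one-dimensional Python sketch with the Haar pair (a stand-in for the Gaussian / difference-of-Gaussian filters, not the NIFS or C80 implementation):

        import numpy as np

        h = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass filter
        g = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass filter (quadrature mirror of h)

        def analyze(x):
            lo = np.convolve(x, h)[1::2]   # filter, then decimate by 2
            hi = np.convolve(x, g)[1::2]
            return lo, hi

        def synthesize(lo, hi):
            up_lo = np.zeros(2 * len(lo)); up_lo[::2] = lo   # upsample by 2
            up_hi = np.zeros(2 * len(hi)); up_hi[::2] = hi
            return np.convolve(up_lo, h[::-1]) + np.convolve(up_hi, g[::-1])

        x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
        lo, hi = analyze(x)
        print(np.allclose(synthesize(lo, hi)[:len(x)], x))   # True: perfect reconstruction for the Haar pair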

  9. Real-time trajectory optimization on parallel processors

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1993-01-01

    A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm that is suitable to do real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on 3 example problems: the Goddard problem, the acceleration-limited, planar minimum-time to the origin problem, and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.

  10. Power processor for a 30cm ion thruster

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Inouye, L. Y.

    1974-01-01

    A thermal vacuum power processor for the NASA Lewis 30cm Mercury Ion Engine was designed, fabricated and tested to determine compliance with electrical specifications. The power processor breadboard used the silicon controlled rectifier (SCR) series resonant inverter as the basic power stage to process all the power to an ion engine. The power processor includes a digital interface unit to process all input commands and internal telemetry signals so that operation is compatible with a central computer system. The breadboard was tested in a thermal vacuum environment. Integration tests were performed with the ion engine and demonstrated operational compatibility and reliable operation without any component failures. Electromagnetic interference data were also recorded on the design to provide information on the interaction with the total spacecraft.

  11. Fault detection and bypass in a sequence information signal processor

    NASA Technical Reports Server (NTRS)

    Peterson, John C. (Inventor); Chow, Edward T. (Inventor)

    1992-01-01

    The invention comprises a plurality of scan registers, each such register respectively associated with a processor element; an on-chip comparator, encoder and fault bypass register. Each scan register generates a unitary signal the logic state of which depends on the correctness of the input from the previous processor in the systolic array. These unitary signals are input to a common comparator which generates an output indicating whether or not an error has occurred. These unitary signals are also input to an encoder which identifies the location of any fault detected so that an appropriate multiplexer can be switched to bypass the faulty processor element. Input scan data can be readily programmed to fully exercise all of the processor elements so that no fault can remain undetected.
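
    The control flow of the scheme reads naturally as a small function: one pass/fail bit per processor element, a comparator that combines them, and an encoder that names the element to route around. The Python sketch below is a behavioral model with invented data, not the patented hardware:

        def check_and_bypass(expected, observed):
            # One pass/fail bit per processor element (the scan-register outputs).
            flags = [int(e != o) for e, o in zip(expected, observed)]
            fault_detected = any(flags)                                # comparator output
            bypass_index = flags.index(1) if fault_detected else None  # encoder output -> multiplexer select
            return fault_detected, bypass_index

        print(check_and_bypass([3, 1, 4, 1], [3, 1, 9, 1]))  # (True, 2): bypass the third element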

  12. [Improving speech comprehension using a new cochlear implant speech processor].

    PubMed

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  13. A general natural-language text processor for clinical radiology.

    PubMed Central

    Friedman, C; Alderson, P O; Austin, J H; Cimino, J J; Johnson, S B

    1994-01-01

    OBJECTIVE: Development of a general natural-language processor that identifies clinical information in narrative reports and maps that information into a structured representation containing clinical terms. DESIGN: The natural-language processor provides three phases of processing, all of which are driven by different knowledge sources. The first phase performs the parsing. It identifies the structure of the text through use of a grammar that defines semantic patterns and a target form. The second phase, regularization, standardizes the terms in the initial target structure via a compositional mapping of multi-word phrases. The third phase, encoding, maps the terms to a controlled vocabulary. Radiology is the test domain for the processor and the target structure is a formal model for representing clinical information in that domain. MEASUREMENTS: The impression sections of 230 radiology reports were encoded by the processor. Results of an automated query of the resultant database for the occurrences of four diseases were compared with the analysis of a panel of three physicians to determine recall and precision. RESULTS: Without training specific to the four diseases, recall and precision of the system (combined effect of the processor and query generator) were 70% and 87%. Training of the query component increased recall to 85% without changing precision. PMID:7719797
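
    The recall and precision figures quoted above follow the usual definitions, which a few lines of Python make explicit. The report sets below are hypothetical; they are sized only so the ratios land near the reported values:

        def recall_precision(retrieved, relevant):
            # Recall: fraction of truly relevant reports that were retrieved.
            # Precision: fraction of retrieved reports that are truly relevant.
            tp = len(retrieved & relevant)
            return tp / len(relevant), tp / len(retrieved)

        retrieved = {1, 2, 3, 4, 5, 6, 7, 8}          # reports flagged by the automated query
        relevant  = {1, 2, 3, 4, 5, 6, 7, 9, 10, 11}  # reports judged positive by the physician panel
        print(recall_precision(retrieved, relevant))  # (0.7, 0.875): 70% recall, ~87% precision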

  14. ELIPS: Toward a Sensor Fusion Processor on a Chip

    NASA Technical Reports Server (NTRS)

    Daud, Taher; Stoica, Adrian; Tyson, Thomas; Li, Wei-te; Fabunmi, James

    1998-01-01

    The paper presents the concept and initial tests from the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact, low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based and neural forms of computation can serve as the main primitives of an "intelligent" processor. Thus, in the same way classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational intelligence primitives, and relies on a set of fuzzy set, fuzzy inference and neural modules, built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several interceptor data sets, three important ELIPS building blocks (a fuzzy set preprocessor, a rule-based fuzzy system and a neural network) have been fabricated in analog VLSI hardware and demonstrated microsecond processing times.

  15. Current status of the joint Mayo Clinic-IBM PACS project

    NASA Astrophysics Data System (ADS)

    Hangiandreou, Nicholas J.; Williamson, Byrn, Jr.; Gehring, Dale G.; Persons, Kenneth R.; Reardon, Frank J.; Salutz, James R.; Felmlee, Joel P.; Loewen, M. D.; Forbes, Glenn S.

    1994-05-01

    A multi-phase collaboration between Mayo Clinic and IBM-Rochester was undertaken, with the goal of developing a picture archiving and communication system for routine clinical use in the Radiology Department. The initial phase of this project (phase 0) was started in 1988. The current system has been fully integrated into the clinical practice and, to date, over 6.5 million images from 16 imaging modalities have been archived. Phase 3 of this project has recently concluded.

  16. The Upper- to Middle-Crustal Section of the Alisitos Oceanic Arc, (Baja, Mexico): an Analog of the Izu-Bonin-Marianas (IBM) Arc

    NASA Astrophysics Data System (ADS)

    Medynski, S.; Busby, C.; DeBari, S. M.; Morris, R.; Andrews, G. D.; Brown, S. R.; Schmitt, A. K.

    2016-12-01

    The Rosario segment of the Cretaceous Alisitos arc in Baja California is an outstanding field analog for the Izu-Bonin-Mariana (IBM) arc, because it is structurally intact, unmetamorphosed, and has superior three-dimensional exposures of an upper- to middle-crustal section through an extensional oceanic arc. Previous work1, done in the pre-digital era, used geologic mapping to define two phases of arc evolution, with normal faulting in both phases: (1) extensional oceanic arc, with silicic calderas, and (2) oceanic arc rifting, with widespread diking and dominantly mafic effusions. Our new geochemical data match the extensional zone immediately behind the Izu arc front, and is different from the arc front and rear arc, consistent with geologic relations. Our study is developing a 3D oceanic arc crustal model, with geologic maps draped on Google Earth images, and GPS-located outcrop information linked to new geochemical, geochronological and petrographic data, with the goal of detailing the relationships between plutonic, hypabyssal, and volcanic rocks. This model will be used by scientists as a reference model for past (IBM-1, 2, 3) and proposed IBM (IBM-4) drilling activities. New single-crystal zircon analysis by TIMS supports the interpretation, based on batch SIMS analysis of chemically-abraded zircon1, that the entire upper-middle crustal section accumulated in about 1.5 Myr. Like the IBM, volcanic zircons are very sparse, but zircon chemistry on the plutonic rocks shows trace element compositions that overlap to those measured in IBM volcanic zircons by A. Schmitt (unpublished data). Zircons have U-Pb ages up to 20 Myr older than the eruptive age, suggesting remelting of older parts of the arc, similar to that proposed for IBM (using different evidence). Like IBM, some very old zircons are also present, indicating the presence of old crustal fragments, or sediments derived from them, in the basement. However, our geochemical data show that the magmas are

  17. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Riley, G.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh

  18. Spatial Phase Coding for Incoherent Optical Processors

    NASA Technical Reports Server (NTRS)

    Tigin, D. V.; Lavrentev, A. A.; Gary, C. K.

    1994-01-01

    In this paper we introduce spatial phase coding of incoherent optical signals for representing signed numbers in optical processors and present an experimental demonstration of this coding technique. If a diffraction grating, such as an acousto-optic cell, modulates a stream of light, the image of the grating can be recovered from the diffracted beam. The position of the grating image, or more precisely its phase, can be used to denote the sign of the number represented by the diffracted light. The intensity of the light represents the magnitude of the number. This technique is more economical than current methods in terms of the number of information channels required to represent a number and the amount of post processing required.
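
    The coding rule above is compact: light intensity carries the magnitude of a number, and the spatial phase of the grating image (0 or pi) carries its sign. A minimal numerical sketch of that encode/decode convention in Python (illustrative only, not a model of the optical hardware):

        import numpy as np

        def encode(x):
            # Intensity carries |x|; a 0 or pi phase of the grating image carries the sign.
            return abs(x), 0.0 if x >= 0 else np.pi

        def decode(intensity, phase):
            return intensity * np.cos(phase)   # cos(0) = +1, cos(pi) = -1

        print(decode(*encode(-3.5)), decode(*encode(2.0)))  # -3.5 2.0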

  19. On nonlinear finite element analysis in single-, multi- and parallel-processors

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R.; Islam, M.; Salama, M.

    1982-01-01

    Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, therefore the feasibility of the finite element method for nonlinear analysis is established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e. partitioning) are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to the occasional shared data, just as that due to the shared facilities.
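
    In the standard notation for such iterations (assumed here, not quoted from the paper), each Newton-Raphson step solves a linear system built from the tangent stiffness K_T and the residual (out-of-balance) force R at the current displacement estimate u_k:

        K_T(u_k)\,\Delta u_k \;=\; -\,R(u_k), \qquad u_{k+1} \;=\; u_k + \Delta u_k ,

    which is exactly the linear finite-element solve whose data organization on the various memory hierarchies and parallel processors is analyzed above.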

  20. The IBM PC as an Online Search Machine--Part 2: Physiology for Searchers.

    ERIC Educational Resources Information Center

    Kolner, Stuart J.

    1985-01-01

    Enumerates "hardware problems" associated with use of the IBM personal computer as an online search machine: purchase of machinery, unpacking of parts, and assembly into a properly functioning computer. Components that allow transformations of computer into a search machine (combination boards, printer, modem) and diagnostics software…

  1. A performance comparison of the IBM RS/6000 and the Astronautics ZS-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.M.; Abraham, S.G.; Davidson, E.S.

    1991-01-01

    Concurrent uniprocessor architectures, of which vector and superscalar are two examples, are designed to capitalize on fine-grain parallelism. The authors have developed a performance evaluation method for comparing and improving these architectures, and in this article they present the methodology and a detailed case study of two machines. The runtime of many programs is dominated by time spent in loop constructs - for example, Fortran Do-loops. Loops generally comprise two logical processes: The access process generates addresses for memory operations while the execute process operates on floating-point data. Memory access patterns typically can be generated independently of the data in the execute process. This independence allows the access process to slip ahead, thereby hiding memory latency. The IBM 360/91 was designed in 1967 to achieve slip dynamically, at runtime. One CPU unit executes integer operations while another handles floating-point operations. Other machines, including the VAX 9000 and the IBM RS/6000, use a similar approach.

  2. Multibus-based parallel processor for simulation

    NASA Technical Reports Server (NTRS)

    Ogrady, E. P.; Wang, C.-H.

    1983-01-01

    A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.

  3. A MAP fixed-point, packing-unpacking routine for the IBM 7094 computer

    Treesearch

    Robert S. Helfman

    1966-01-01

    Two MAP (Macro Assembly Program) computer routines for packing and unpacking fixed point data are described. Use of these routines with Fortran IV Programs provides speedy access to quantities of data which far exceed the normal storage capacity of IBM 7000-series computers. Many problems that could not be attempted because of the slow access-speed of tape...

  4. Meteorological Processors and Accessory Programs

    EPA Pesticide Factsheets

    Surface and upper air data, provided by NWS, are important inputs for air quality models. Before these data are used in some of the EPA dispersion models, meteorological processors are used to manipulate the data.

  5. ORNL Cray X1 evaluation status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, P.K.; Alexander, R.A.; Apra, E.

    2004-05-01

    On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, software environment and to predict the expected sustained performance on key DOE applications codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the

  6. The Engineer Topographic Laboratories /ETL/ hybrid optical/digital image processor

    NASA Astrophysics Data System (ADS)

    Benton, J. R.; Corbett, F.; Tuft, R.

    1980-01-01

    An optical-digital processor for generalized image enhancement and filtering is described. The optical subsystem is a two-PROM Fourier filter processor. Input imagery is isolated, scaled, and imaged onto the first PROM; this input plane acts like a liquid gate and serves as an incoherent-to-coherent converter. The image is transformed onto a second PROM which also serves as a filter medium; filters are written onto the second PROM with a laser scanner in real time. A solid state CCTV camera records the filtered image, which is then digitized and stored in a digital image processor. The operator can then manipulate the filtered image using the gray scale and color remapping capabilities of the video processor as well as the digital processing capabilities of the minicomputer.

  7. A word processor optimized for preparing journal articles and student papers.

    PubMed

    Wolach, A H; McHale, M A

    2001-11-01

    A new Windows-based word processor for preparing journal articles and student papers is described. In addition to standard features found in word processors, the present word processor provides specific help in preparing manuscripts. Clicking on "Reference Help (APA Form)" in the "File" menu provides a detailed help system for entering the references in a journal article. Clicking on "Examples and Explanations of APA Form" provides a help system with examples of the various sections of a review article, journal article that has one experiment, or journal article that has two or more experiments. The word processor can automatically place the manuscript page header and page number at the top of each page using the form required by APA and Psychonomic Society journals. The "APA Form" submenu of the "Help" menu provides detailed information about how the word processor is optimized for preparing articles and papers.

  8. Hazardous Waste Cleanup: IBM Corporation-TJ Watson Research Center in Yorktown Heights, New York

    EPA Pesticide Factsheets

    IBM Corporation - TJ Watson Research Center is located in southern Yorktown near the boundary separating the Town of Yorktown from the Town of New Castle. The site occupies an area of approximately 217 acres, and adjoining land uses are predominantly residential.

  9. Extended performance electric propulsion power processor design study. Volume 2: Technical summary

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.

    1977-01-01

    Electric propulsion power processor technology has progressed during the past decade to the point that it is considered ready for application. Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30 cm ion thruster power processor with a beam supply power rating of 2.2 kW to 10 kW for the main propulsion power stage. Extensions in power processor performance were defined and were designed in sufficient detail to determine efficiency, component weight, part count, reliability and thermal control. A detailed design was performed on a microprocessor as the thyristor power processor controller. A reliability analysis was performed to evaluate the effect of the control electronics redesign. Preliminary electrical design, mechanical design and thermal analysis were performed on a 6 kW power transformer for the beam supply. Bi-Mod mechanical, structural and thermal control configurations were evaluated for the power processor and preliminary estimates of mechanical weight were determined.

  10. Ethernet-Enabled Power and Communication Module for Embedded Processors

    NASA Technical Reports Server (NTRS)

    Perotti, Jose; Oostdyk, Rebecca

    2010-01-01

    The power and communications module is a printed circuit board (PCB) that has the capability of providing power to an embedded processor and converting Ethernet packets into serial data to transfer to the processor. The purpose of the new design is to address the shortcomings of previous designs, including limited bandwidth and program memory, lack of control over packet processing, and lack of support for timing synchronization. The new design of the module creates a robust serial-to-Ethernet conversion that is powered using the existing Ethernet cable. This innovation has a small form factor that allows it to power processors and transducers with minimal space requirements.

  11. Proceedings of the Seminar on the DoD Computer Security Initiative Program, National Bureau of Standards, Gaithersburg, Maryland, July 17-18, 1979.

    DTIC Science & Technology

    1979-01-01

    specifications have been prepared for a DoD communications processor on an IBM minicomputer, a minicomputer time sharing system for the DEC PDP-11 and...the Honeywell Level 6, a virtual machine monitor for the IBM 370, and Multics [10] for the Honeywell Level 68.

  12. A Version of the Graphics-Oriented Interactive Finite Element Time-Sharing System (GIFTS) for an IBM with CP/CMS.

    DTIC Science & Technology

    1982-03-01

    Postgraduate School, Monterey, California. Master's and Engineer's thesis, March 1982. A version of the Graphics-oriented, Interactive, Finite element, Time-sharing System (GIFTS) has been developed for, and

  13. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    NASA Astrophysics Data System (ADS)

    Hristov, Ivan; Goranov, Goran; Hristova, Radoslava

    2018-02-01

    We consider a standing wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed sine-Gordon equations. We make an OpenMP realization which explores both thread and SIMD levels of parallelism. We test the OpenMP program on two different, energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the Hybrilit cluster, and the Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show 2 times better performance on the KNL processor.
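
    As a simplified stand-in for the computation being benchmarked (a single, unperturbed 2D sine-Gordon equation with an invented initial condition, not the authors' coupled perturbed system or their OpenMP code), the core stencil update that thread- and SIMD-level parallelism would be applied to looks like this in Python:

        import numpy as np

        def leapfrog_step(u_prev, u_curr, dt, dx):
            # One explicit time step of u_tt = u_xx + u_yy - sin(u) on a periodic grid.
            lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
                   np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4 * u_curr) / dx**2
            return 2 * u_curr - u_prev + dt**2 * (lap - np.sin(u_curr))

        n, dx, dt = 128, 0.1, 0.02
        u0 = 4 * np.arctan(np.exp(np.add.outer(np.arange(n), np.arange(n)) * dx - n * dx))  # kink-like initial data
        u1 = u0.copy()
        for _ in range(10):
            u0, u1 = u1, leapfrog_step(u0, u1, dt, dx)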

  14. Cache Energy Optimization Techniques For Modern Processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    2013-01-01

    Modern multicore processors are employing large last-level caches, for example Intel's E7-8800 processor uses 24MB L3 cache. Further, with each CMOS technology generation, leakage energy has been dramatically increasing and hence, leakage energy is expected to become a major source of energy dissipation, especially in last-level caches (LLCs). The conventional schemes of cache energy saving either aim at saving dynamic energy or are based on properties specific to first-level caches, and thus these schemes have limited utility for last-level caches. Further, several other techniques require offline profiling or per-application tuning and hence are not suitable for product systems. In this book, we present novel cache leakage energy saving schemes for single-core and multicore systems; desktop, QoS, real-time and server systems. Also, we present cache energy saving techniques for caches designed with both conventional SRAM devices and emerging non-volatile devices such as STT-RAM (spin-torque transfer RAM). We present software-controlled, hardware-assisted techniques which use dynamic cache reconfiguration to configure the cache to the most energy efficient configuration while keeping the performance loss bounded. To profile and test a large number of potential configurations, we utilize low-overhead, micro-architecture components, which can be easily integrated into modern processor chips. We adopt a system-wide approach to save energy to ensure that cache reconfiguration does not increase energy consumption of other components of the processor. We have compared our techniques with state-of-the-art techniques and have found that our techniques outperform them in terms of energy efficiency and other relevant metrics. The techniques presented in this book have important applications in improving energy-efficiency of higher-end embedded, desktop, QoS, real-time, server processors and multitasking systems. This book is intended to be a valuable guide for both
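
    The reconfiguration policy sketched above can be phrased as a tiny selection problem: among the candidate cache configurations that the profiling hardware has characterized, pick the lowest-energy one whose predicted slowdown stays within the allowed bound. The Python sketch below uses hypothetical names and numbers:

        def pick_configuration(profiles, max_slowdown=0.05):
            # profiles maps configuration name -> (relative energy, predicted slowdown vs. full cache).
            feasible = {c: e for c, (e, slow) in profiles.items() if slow <= max_slowdown}
            return min(feasible, key=feasible.get)

        profiles = {
            "24MB_full":   (1.00, 0.00),
            "12MB_half":   (0.62, 0.03),
            "6MB_quarter": (0.41, 0.09),
        }
        print(pick_configuration(profiles))  # "12MB_half": saves energy while keeping slowdown under 5%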

  15. Finite elements and the method of conjugate gradients on a concurrent processor

    NASA Technical Reports Server (NTRS)

    Lyzenga, G. A.; Raefsky, A.; Hager, G. H.

    1985-01-01

    An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
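
    For reference, the serial method of conjugate gradients that the element-based decomposition distributes across processors is only a few lines; the Python sketch below is the textbook algorithm for a symmetric positive-definite system and does not model the Hypercube communication:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
            # Textbook CG for symmetric positive-definite A; the matrix-vector
            # products are what the concurrent implementation distributes.
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b), np.linalg.solve(A, b))  # both approx [0.0909, 0.6364]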

  16. Finite elements and the method of conjugate gradients on a concurrent processor

    NASA Technical Reports Server (NTRS)

    Lyzenga, G. A.; Raefsky, A.; Hager, B. H.

    1984-01-01

    An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90% for sufficiently large problems.

  17. Extended performance electric propulsion power processor design study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.

    1977-01-01

    Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30cm ion thruster power processor with a beam supply rating of 2.2kW to 10kW. Extensions in power processor performance were defined and were designed in sufficient detail to determine efficiency, component weight, part count, reliability and thermal control. Preliminary electrical design, mechanical design, and thermal analysis were performed on a 6kW power transformer for the beam supply. Bi-Mod mechanical, structural, and thermal control configurations were evaluated for the power processor, and preliminary estimates of mechanical weight were determined. A program development plan was formulated that outlines the work breakdown structure for the development, qualification and fabrication of the power processor flight hardware.

  18. Conditions for space invariance in optical data processors used with coherent or noncoherent light.

    PubMed

    Arsenault, H R

    1972-10-01

    The conditions for space invariance in coherent and noncoherent optical processors are considered. All linear optical processors are shown to belong to one of two types. The conditions for space invariance are more stringent for noncoherent processors than for coherent processors, so that a system that is linear in coherent light may be nonlinear in noncoherent light. However, any processor that is linear in noncoherent light is also linear in the coherent limit.

  19. The IBM PC as an Online Search Machine. Part 5: Searching through Crosstalk.

    ERIC Educational Resources Information Center

    Kolner, Stuart J.

    1985-01-01

    This last of a five-part series on using the IBM personal computer for online searching highlights a brief review, search process, making the connection, switching between screens and modes, online transaction, capture buffer controls, coping with options, function keys, script files, processing downloaded information, note to TELEX users, and…

  20. The POIS (Parkland On-Line Information System) Implementation of the IBM Health Care Support/Patient Care System

    PubMed Central

    Mishelevich, David J.; Hudson, Betty G.; Van Slyke, Donald; Mize, Elaine I.; Robinson, Anna L.; Brieden, Helen C.; Atkinson, Jack; Robertson, James

    1980-01-01

    The installation of major components of a comprehensive Hospital Information System (HIS) called POIS, the Parkland On-line Information System, is described for the Dallas County Hospital District (DCHD), also known as Parkland Memorial Hospital, including identified success factors. Installation of the on-line IBM Health Care Support (HCS) Registration and Admissions packages occurred in 1976, and implementation of the HCS Patient Care System (PCS), which includes on-line support of health care areas such as nursing stations and ancillary areas, began in 1977. The Duke Hospital Information System (DHIS) is marketed as the IBM HCS/Patient Care System (PCS); DCHD was the validation site. POIS has order entry, result reporting, and work management components. While most of the patient care components are currently installed for the inpatient service, the Laboratories are being installed for the outpatient and Emergency areas as well. The Clinic Appointment System developed at the University of Michigan is also installed. The HCS family of programs uses DL/1 and CICS and was installed in the OS versions, currently running under MVS on an IBM 370/168 Model 3 with 8 megabytes of main memory.

  1. Potential of minicomputer/array-processor system for nonlinear finite-element analysis

    NASA Technical Reports Server (NTRS)

    Strohkorb, G. A.; Noor, A. K.

    1983-01-01

    The potential of using a minicomputer/array-processor system for the efficient solution of large-scale, nonlinear, finite-element problems is studied. A Prime 750 is used as the host computer, and a software simulator residing on the Prime is employed to assess the performance of the Floating Point Systems AP-120B array processor. Major hardware characteristics of the system such as virtual memory and parallel and pipeline processing are reviewed, and the interplay between various hardware components is examined. Effective use of the minicomputer/array-processor system for nonlinear analysis requires the following: (1) proper selection of the computational procedure and the capability to vectorize the numerical algorithms; (2) reduction of input-output operations; and (3) overlapping host and array-processor operations. A detailed discussion is given of techniques to accomplish each of these tasks. Two benchmark problems with 1715 and 3230 degrees of freedom, respectively, are selected to measure the anticipated gain in speed obtained by using the proposed algorithms on the array processor.

  2. Demonstration of entanglement assisted invariance on IBM's quantum experience.

    PubMed

    Deffner, Sebastian

    2017-11-01

    Quantum entanglement is among the most fundamental, yet from classical intuition also most surprising, properties of the fully quantum nature of physical reality. We report several experiments performed on IBM's Quantum Experience demonstrating envariance (entanglement-assisted invariance). Envariance is a recently discovered symmetry of composite quantum systems, a purely quantum phenomenon of pure states that lies at the foundations of physics. These easily reproducible and freely accessible experiments on Quantum Experience provide simple tools to study the properties of envariance, and we illustrate this for several cases with "quantum universes" consisting of up to five qubits.

  3. Scheduling time-critical graphics on multiple processors

    NASA Technical Reports Server (NTRS)

    Meyer, Tom W.; Hughes, John F.

    1995-01-01

    This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.
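
    One way to realize the progressive-refinement idea described above is a greedy upgrade loop: start every object at its coarsest level (so a feasible schedule always exists, assuming the coarsest levels fit the frame budget) and spend the remaining budget on whichever upgrade offers the best benefit per unit cost. The sketch below is a generic illustration of that pattern, not the paper's algorithm; the objects, cost, and benefit structures are assumptions.

        # Greedy progressive-refinement scheduler sketch.
        import heapq

        def schedule(objects, budget):
            # objects[i] is a list of (cost, benefit) levels, coarsest first.
            levels = [0] * len(objects)
            spent = sum(obj[0][0] for obj in objects)    # everything at coarsest level
            heap = []
            for i, obj in enumerate(objects):
                if len(obj) > 1:
                    dc = obj[1][0] - obj[0][0]
                    db = obj[1][1] - obj[0][1]
                    heapq.heappush(heap, (-db / dc, i))  # best benefit-per-cost first
            while heap:
                _, i = heapq.heappop(heap)
                obj, lvl = objects[i], levels[i]
                dc = obj[lvl + 1][0] - obj[lvl][0]
                if spent + dc > budget:                  # upgrade does not fit: drop it
                    continue
                spent += dc
                levels[i] = lvl + 1
                if lvl + 2 < len(obj):                   # queue this object's next upgrade
                    dc2 = obj[lvl + 2][0] - obj[lvl + 1][0]
                    db2 = obj[lvl + 2][1] - obj[lvl + 1][1]
                    heapq.heappush(heap, (-db2 / dc2, i))
            return levels, spent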

  4. SSC 254 Screen-Based Word Processors: Production Tests. The Lanier Word Processor.

    ERIC Educational Resources Information Center

    Moyer, Ruth A.

    Designed for use in Trident Technical College's Secretarial Lab, this series of 12 production tests focuses on the use of the Lanier Word Processor for a variety of tasks. In tests 1 and 2, students are required to type and print out letters. Tests 3 through 8 require students to reformat a text; make corrections on a letter; divide and combine…

  5. Software for embedded processors: Problems and solutions

    NASA Astrophysics Data System (ADS)

    Bogaerts, J. A. C.

    1990-08-01

    Data acquisition systems in HEP experiments use a wide spectrum of computers to cope with two major problems: high event rates and a large data volume. They do this by using special fast trigger processors at the source to reduce the event rate by several orders of magnitude. The next stage of a data acquisition system consists of a network of fast but conventional microprocessors which are embedded in high speed bus systems where data is still further reduced, filtered and merged. In the final stage complete events are farmed out to another collection of processors, which reconstruct the events and perhaps achieve a further event rejection by a small factor, prior to recording onto magnetic tape. Detectors are monitored by analyzing a fraction of the data. This may be done for individual detectors at an early stage of the data acquisition, or it may be delayed until the complete events are available. A network of workstations is used for monitoring, displays and run control. Software for trigger processors must have a simple structure: rejection algorithms are carefully optimized, and overheads introduced by system software cannot be tolerated. The embedded microprocessors have to co-operate, and need to be synchronized with the preceding and following stages. Real-time kernels are typically used to solve synchronization and communication problems. Applications are usually coded in C, which is reasonably efficient and allows direct control over low-level hardware functions. Event reconstruction software is very similar or even identical to offline software, predominantly written in FORTRAN. With the advent of powerful RISC processors, and with manufacturers tending to adopt open bus architectures, there is a move towards commercial processors and hence the introduction of the UNIX operating system. Building and controlling such a heterogeneous data acquisition system puts a heavy strain on the software. Communications is now as important as CPU capacity and I

  6. First Results of an “Artificial Retina” Processor Prototype

    DOE PAGES

    Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro; ...

    2016-11-15

    We report on the performance of a specialized processor capable of reconstructing charged particle tracks in a realistic LHC silicon tracker detector, at the same speed as the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called the "artificial retina algorithm", inspired by the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. The prototype is also the first step towards a real-time track reconstruction device aimed at processing complex events of high-luminosity LHC experiments at a 40 MHz crossing rate.

  7. First Results of an “Artificial Retina” Processor Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro

    We report on the performance of a specialized processor capable of reconstructing charged particle tracks in a realistic LHC silicon tracker detector, at the same speed as the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called the "artificial retina algorithm", inspired by the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. The prototype is also the first step towards a real-time track reconstruction device aimed at processing complex events of high-luminosity LHC experiments at a 40 MHz crossing rate.

  8. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700, and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
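
    The core of such a simulator is a state vector of 2^n complex amplitudes to which gates are applied. The sketch below shows a single-qubit gate application in NumPy (illustrative only; the paper's software distributes the amplitudes over many processors and nodes, which is omitted here).

        # Apply a 2x2 unitary to one qubit of an n-qubit state vector.
        import numpy as np

        def apply_single_qubit_gate(state, gate, target, n_qubits):
            """state: complex array of length 2**n_qubits; gate: 2x2 unitary."""
            psi = state.reshape([2] * n_qubits)
            psi = np.moveaxis(psi, target, 0)           # bring target axis to the front
            psi = np.tensordot(gate, psi, axes=(1, 0))  # contract the gate with that axis
            psi = np.moveaxis(psi, 0, target)           # restore the original axis order
            return psi.reshape(-1)

        # Example: put qubit 0 of a 3-qubit register into an equal superposition.
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        state = np.zeros(8, dtype=complex); state[0] = 1.0
        state = apply_single_qubit_gate(state, H, 0, 3)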

  9. Efficient Parallel Algorithms on Restartable Fail-Stop Processors

    DTIC Science & Technology

    1991-01-01

    ... resource (memory), and (3) that processors, memory and their interconnection must be perfectly ... The model of parallel computation known as the Par... failure and restart errors. We describe these arguments in [AAPS 87] (in a deterministic setting), Section 3. ... granularity at the processor level; for recent work on gate granularities see [AH 90, Pip 85].

  10. 76 FR 21033 - International Business Machines (IBM), Sales and Distribution Business Unit, Global Sales...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    ... of the negative determination regarding workers' eligibility to apply for Trade Adjustment Assistance (TAA) applicable to workers and former workers of International Business Machines (IBM), Sales and... was published in the Federal Register on November 17, 2010 (75 FR 70296). The workers supply computer...

  11. A Simple and Affordable TTL Processor for the Classroom

    ERIC Educational Resources Information Center

    Feinberg, Dave

    2007-01-01

    This paper presents a simple 4 bit computer processor design that may be built using TTL chips for less than $65. In addition to describing the processor itself in detail, we discuss our experience using the laboratory kit and its associated machine instruction set to teach computer architecture to high school students. (Contains 3 figures and 5…

  12. Interactive Digital Signal Processor

    NASA Technical Reports Server (NTRS)

    Mish, W. H.

    1985-01-01

    The Interactive Digital Signal Processor (IDSP) consists of a set of time-series analysis "operators" based on various algorithms commonly used for digital signal analysis. Processing of a digital signal time series to extract information is usually achieved by applying a number of fairly standard operations. IDSP is an excellent teaching tool for demonstrating the application of time-series operators to artificially generated signals.

  13. Computer program documentation for the pasture/range condition assessment processor

    NASA Technical Reports Server (NTRS)

    Mcintyre, K. S.; Miller, T. G. (Principal Investigator)

    1982-01-01

    The processor that drives the RANGE software allows the user to analyze LANDSAT data containing pasture and rangeland. Analysis includes mapping, generating statistics, calculating vegetative indexes, and plotting vegetative indexes. Routines for using the processor are given, and a flow diagram is included.

  14. Safe and Efficient Support for Embedded Multi-Processors in Ada

    NASA Astrophysics Data System (ADS)

    Ruiz, Jose F.

    2010-08-01

    New software demands increasing processing power, and multi-processor platforms are spreading as the answer to achieve the required performance. Embedded real-time systems are also subject to this trend, but in the case of real-time mission-critical systems, the properties of reliability, predictability and analyzability are also paramount. The Ada 2005 language defined a subset of its tasking model, the Ravenscar profile, that provides the basis for the implementation of deterministic and time analyzable applications on top of a streamlined run-time system. This Ravenscar tasking profile, originally designed for single processors, has proven remarkably useful for modelling verifiable real-time single-processor systems. This paper proposes a simple extension to the Ravenscar profile to support multi-processor systems using a fully partitioned approach. The implementation of this scheme is simple, and it can be used to develop applications amenable to schedulability analysis.

  15. Three-wheel air turbocompressor for PEM fuel cell systems

    DOEpatents

    Rehg, Tim; Gee, Mark; Emerson, Terence P.; Ferrall, Joe; Sokolov, Pavel

    2003-08-19

    A fuel cell system comprises a compressor and a fuel processor downstream of the compressor. A fuel cell stack is in communication with the fuel processor and compressor. A combustor is downstream of the fuel cell stack. First and second turbines are downstream of the fuel processor and in parallel flow communication with one another. A distribution valve is in communication with the first and second turbines. The first and second turbines are mechanically engaged to the compressor. A bypass valve is intermediate the compressor and the second turbine, with the bypass valve enabling a compressed gas from the compressor to bypass the fuel processor.

  16. 7 CFR 989.13 - Processor.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 8 2012-01-01 2012-01-01 false Processor. 989.13 Section 989.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE RAISINS PRODUCED FROM GRAPES GROWN IN...

  17. 7 CFR 989.13 - Processor.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 8 2013-01-01 2013-01-01 false Processor. 989.13 Section 989.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE RAISINS PRODUCED FROM GRAPES GROWN IN...

  18. 7 CFR 989.13 - Processor.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Processor. 989.13 Section 989.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE RAISINS PRODUCED FROM GRAPES GROWN IN...

  19. 7 CFR 989.13 - Processor.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Processor. 989.13 Section 989.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE RAISINS PRODUCED FROM GRAPES GROWN IN...

  20. 7 CFR 989.13 - Processor.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Processor. 989.13 Section 989.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE RAISINS PRODUCED FROM GRAPES GROWN IN...

  1. 7 CFR 1435.306 - Allocation of marketing allotments to processors.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.306 Allocation of marketing allotments to processors. (a) Each sugar beet processor's allocation, other than a new entrant's, of the beet allotment will be...

  2. 7 CFR 1435.306 - Allocation of marketing allotments to processors.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.306 Allocation of marketing allotments to processors. (a) Each sugar beet processor's allocation, other than a new entrant's, of the beet allotment will be...

  3. 7 CFR 1435.306 - Allocation of marketing allotments to processors.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.306 Allocation of marketing allotments to processors. (a) Each sugar beet processor's allocation, other than a new entrant's, of the beet allotment will be...

  4. 7 CFR 1435.306 - Allocation of marketing allotments to processors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.306 Allocation of marketing allotments to processors. (a) Each sugar beet processor's allocation, other than a new entrant's, of the beet allotment will be...

  5. 7 CFR 1435.306 - Allocation of marketing allotments to processors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.306 Allocation of marketing allotments to processors. (a) Each sugar beet processor's allocation, other than a new entrant's, of the beet allotment will be...

  6. A Modular Pipelined Processor for High Resolution Gamma-Ray Spectroscopy

    NASA Astrophysics Data System (ADS)

    Veiga, Alejandro; Grunfeld, Christian

    2016-02-01

    The design of a digital signal processor for gamma-ray applications is presented in which a single ADC input can simultaneously provide temporal and energy characterization of gamma radiation for a wide range of applications. Applying pipelining techniques, the processor is able to manage and synchronize very large volumes of streamed real-time data. Its modular user interface provides a flexible environment for experimental design. The processor can fit in a medium-sized FPGA device operating at ADC sampling frequency, providing an efficient solution for multi-channel applications. Two experiments are presented in order to characterize its temporal and energy resolution.

  7. A distributed fault-tolerant signal processor /FTSP/

    NASA Astrophysics Data System (ADS)

    Bonneau, R. J.; Evett, R. C.; Young, M. J.

    1980-01-01

    A digital fault-tolerant signal processor (FTSP), an example of a self-repairing programmable system is analyzed. The design configuration is discussed in terms of fault tolerance, system-level fault detection, isolation and common memory. Special attention is given to the FDIR (fault detection isolation and reconfiguration) logic, noting that the reconfiguration decisions are based on configuration, summary status, end-around tests, and north marker/synchro data. Several mechanisms of fault detection are described which initiate reconfiguration at different levels. It is concluded that the reliability of a signal processor can be significantly enhanced by the use of fault-tolerant techniques.

  8. An innovative on-board processor for lightsats

    NASA Technical Reports Server (NTRS)

    Henshaw, R. M.; Ballard, B. W.; Hayes, J. R.; Lohr, D. A.

    1990-01-01

    The Applied Physics Laboratory (APL) has developed a flightworthy custom microprocessor that increases capability and reduces development costs of lightsat science instruments. This device, called the FRISC (FORTH Reduced Instruction Set Computer), directly executes the high-level language called FORTH, which is ideally suited to the multitasking control and data processing environment of a spaceborne instrument processor. The FRISC will be flown as the onboard processor in the Magnetic Field Experiment on the Freja satellite. APL has achieved a significant increase in onboard processing capability with no increase in cost when compared to the magnetometer instrument on Freja's predecessor, the Viking satellite.

  9. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  10. Backend Control Processor for a Multi-Processor Relational Database Computer System.

    DTIC Science & Technology

    1984-12-01

    THESIS Presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in Partial Fulfillment of the ... development of a Backend Multi-Processor Relational Database Computer System. This thesis addresses a single component of this system, the Backend Control

  11. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle when the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities, are described.
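
    A toy model of the lockstep-compare-and-rollback idea is sketched below (hypothetical Python; the cpu objects, their step method, and the is_a_good diagnosis hook are invented stand-ins for the MP hardware, which compares signals and rolls back within a clock cycle).

        # Conceptual lockstep execution with single-cycle rollback on mismatch.
        import copy

        def lockstep_run(cpu_a, cpu_b, program, is_a_good):
            for instr in program:
                chk_a, chk_b = copy.deepcopy(cpu_a), copy.deepcopy(cpu_b)  # micro checkpoint
                out_a, out_b = cpu_a.step(instr), cpu_b.step(instr)
                if out_a != out_b:                        # concurrent error detection
                    cpu_a, cpu_b = chk_a, chk_b           # micro rollback by one cycle
                    good = cpu_a if is_a_good() else cpu_b
                    cpu_a = copy.deepcopy(good)           # copy state from fault-free copy
                    cpu_b = copy.deepcopy(good)
                    cpu_a.step(instr); cpu_b.step(instr)  # re-execute the failed cycle
            return cpu_a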

  12. Mechanically verified hardware implementing an 8-bit parallel IO Byzantine agreement processor

    NASA Technical Reports Server (NTRS)

    Moore, J. Strother

    1992-01-01

    Consider a network of four processors that use the Oral Messages (Byzantine Generals) Algorithm of Pease, Shostak, and Lamport to achieve agreement in the presence of faults. Bevier and Young have published a functional description of a single processor that, when interconnected appropriately with three identical others, implements this network under the assumption that the four processors step in synchrony. By formalizing the original Pease, et al work, Bevier and Young mechanically proved that such a network achieves fault tolerance. We develop, formalize, and discuss a hardware design that has been mechanically proven to implement their processor. In particular, we formally define mapping functions from the abstract state space of the Bevier-Young processor to a concrete state space of a hardware module and state a theorem that expresses the claim that the hardware correctly implements the processor. We briefly discuss the Brock-Hunt Formal Hardware Description Language which permits designs both to be proved correct with the Boyer-Moore theorem prover and to be expressed in a commercially supported hardware description language for additional electrical analysis and layout. We briefly describe our implementation.

  13. GR712RC- Dual-Core Processor- Product Status

    NASA Astrophysics Data System (ADS)

    Sturesson, Fredrik; Habinc, Sandi; Gaisler, Jiri

    2012-08-01

    The GR712RC System-on-Chip (SoC) is a dual-core LEON3FT system suitable for advanced high-reliability space avionics. Fault tolerance features from Aeroflex Gaisler's GRLIB IP library and an implementation using the Ramon Chips RadSafe cell library enable superior radiation hardness. The GR712RC device has been designed to provide high processing power by including two LEON3FT 32-bit SPARC V8 processors, each with its own high-performance IEEE 754 compliant floating-point unit and SPARC reference memory management unit. This high processing power is combined with a large number of serial interfaces, ranging from high-speed links for data transfers to low-speed control buses for commanding and status acquisition.

  14. The 3D laser radar vision processor system

    NASA Astrophysics Data System (ADS)

    Sebok, T. M.

    1990-10-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.

  15. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.

  16. Treecode with a Special-Purpose Processor

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    1991-08-01

    We describe an implementation of the modified Barnes-Hut tree algorithm for a gravitational N-body calculation on a GRAPE (GRAvity PipE) backend processor. GRAPE is a special-purpose computer for N-body calculations. It receives the positions and masses of particles from a host computer and then calculates the gravitational force at each coordinate specified by the host. To use this GRAPE processor with the hierarchical tree algorithm, the host computer must maintain a list of all nodes that exert force on a particle. If we create this list for each particle of the system at each timestep, the number of floating-point operations on the host and that on GRAPE would become comparable, and the increased speed obtained by using GRAPE would be small. In our modified algorithm, we create a list of nodes for many particles. Thus, the amount of the work required of the host is significantly reduced. This algorithm was originally developed by Barnes in order to vectorize the force calculation on a Cyber 205. With this algorithm, the computing time of the force calculation becomes comparable to that of the tree construction, if the GRAPE backend processor is sufficiently fast. The obtained speed-up factor is 30 to 50 for a RISC-based host computer and GRAPE-1A with a peak speed of 240 Mflops.
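
    The key point of the modified algorithm is that one interaction list is built for a whole group of particles, using the distance from a tree node to the edge of the group, and the resulting list of pseudo-particles is then evaluated for every particle in the group (the part GRAPE performs). A rough Python sketch follows; the tree-node attributes (center, size, com, mass, children, is_leaf) are assumptions for illustration, not the paper's data structures.

        # Build one interaction list per particle group, then evaluate it for all members.
        import numpy as np

        def collect_nodes(node, group_center, group_radius, theta, out):
            # Open a node if it could violate the opening criterion for ANY particle
            # of the group, i.e. measure the distance to the group's edge.
            d = np.linalg.norm(node.center - group_center) - group_radius
            if node.is_leaf or (d > 0 and node.size / d < theta):
                out.append((node.com, node.mass))   # accept node as a pseudo-particle
            else:
                for child in node.children:
                    collect_nodes(child, group_center, group_radius, theta, out)

        def forces_on_group(root, positions, group_center, group_radius, theta=0.7):
            nodes = []
            collect_nodes(root, group_center, group_radius, theta, nodes)
            acc = np.zeros_like(positions)
            for com, mass in nodes:                 # this inner loop is what GRAPE evaluates
                dr = com - positions
                r3 = (np.sum(dr * dr, axis=1) + 1e-12) ** 1.5   # softened to avoid 1/0
                acc += (mass / r3)[:, None] * dr
            return acc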

  17. Power processor for a 20CM ion thruster

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Schoenfeld, A. D.; Cohen, E.

    1973-01-01

    A power processor breadboard for the JPL 20CM Ion Engine was designed, fabricated, and tested to determine compliance with the electrical specification. The power processor breadboard used the silicon-controlled rectifier (SCR) series resonant inverter as the basic power stage to process all the power to the ion engine. The breadboard power processor was integrated with the JPL 20CM ion engine and complete testing was performed. The integration tests were performed without any silicon-controlled rectifier failure. This demonstrated the ruggedness of the series resonant inverter in protecting the switching elements during arcing in the ion engine. A method of fault clearing the ion engine and returning to normal operation without elaborate sequencing and timing control logic was evolved. In this method, the main vaporizer was turned off and the discharge current limit was reduced when an overload existed on the screen/accelerator supply. After the high voltage returned to normal, both the main vaporizer and the discharge were returned to normal.

  18. Cost-benefit study of consumer product take-back programs using IBM's WIT reverse logistics optimization tool

    NASA Astrophysics Data System (ADS)

    Veerakamolmal, Pitipong; Lee, Yung-Joon; Fasano, J. P.; Hale, Rhea; Jacques, Mary

    2002-02-01

    In recent years, there has been increased focus by regulators, manufacturers, and consumers on the issue of product end of life management for electronics. This paper presents an overview of a conceptual study designed to examine the costs and benefits of several different Product Take Back (PTB) scenarios for used electronics equipment. The study utilized a reverse logistics supply chain model to examine the effects of several different factors in PTB programs. The model was done using the IBM supply chain optimization tool known as WIT (Watson Implosion Technology). Using the WIT tool, we were able to determine a theoretical optimal cost scenario for PTB programs. The study was designed to assist IBM internally in determining theoretical optimal Product Take Back program models and determining potential incentives for increasing participation rates.

  19. Merged ozone profiles from four MIPAS processors

    NASA Astrophysics Data System (ADS)

    Laeng, Alexandra; von Clarmann, Thomas; Stiller, Gabriele; Dinelli, Bianca Maria; Dudhia, Anu; Raspollini, Piera; Glatthor, Norbert; Grabowski, Udo; Sofieva, Viktoria; Froidevaux, Lucien; Walker, Kaley A.; Zehner, Claus

    2017-04-01

    The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) was an infrared (IR) limb emission spectrometer on the Envisat platform. Currently, there are four MIPAS ozone data products, including the operational Level-2 ozone product processed at ESA, with the scientific prototype processor being operated at IFAC Florence, and three independent research products developed by the Istituto di Fisica Applicata Nello Carrara (ISAC-CNR)/University of Bologna, Oxford University, and the Karlsruhe Institute of Technology-Institute of Meteorology and Climate Research/Instituto de Astrofísica de Andalucía (KIT-IMK/IAA). Here we present a dataset of ozone vertical profiles obtained by merging ozone retrievals from four independent Level-2 MIPAS processors. We also discuss the advantages and the shortcomings of this merged product. As the four processors retrieve ozone in different parts of the spectra (microwindows), the source measurements can be considered as nearly independent with respect to measurement noise. Hence, the information content of the merged product is greater and the precision is better than those of any parent (source) dataset. The merging is performed on a profile per profile basis. Parent ozone profiles are weighted based on the corresponding error covariance matrices; the error correlations between different profile levels are taken into account. The intercorrelations between the processors' errors are evaluated statistically and are used in the merging. The height range of the merged product is 20-55 km, and error covariance matrices are provided as diagnostics. Validation of the merged dataset is performed by comparison with ozone profiles from ACE-FTS (Atmospheric Chemistry Experiment-Fourier Transform Spectrometer) and MLS (Microwave Limb Sounder). Even though the merging is not supposed to remove the biases of the parent datasets, around the ozone volume mixing ratio peak the merged product is found to have a smaller (up to 0.1 ppmv
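
    A generic form of the covariance-weighted merging described above combines the parent profiles with inverse-covariance weights, x_merged = (sum_i S_i^-1)^-1 sum_i S_i^-1 x_i. The sketch below implements this simple case only and omits the inter-processor error correlations that the paper additionally takes into account.

        # Inverse-covariance (precision-weighted) merge of several profiles.
        import numpy as np

        def merge_profiles(profiles, covariances):
            """profiles: list of length-L vectors; covariances: list of LxL matrices."""
            precisions = [np.linalg.inv(S) for S in covariances]
            P = sum(precisions)                      # combined precision matrix
            S_merged = np.linalg.inv(P)              # covariance of the merged profile
            x_merged = S_merged @ sum(Pi @ x for Pi, x in zip(precisions, profiles))
            return x_merged, S_merged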

  20. Using NetMaster to manage IBM networks

    NASA Technical Reports Server (NTRS)

    Ginsburg, Guss

    1991-01-01

    After defining a network and conveying its importance to support the activities at the JSC, the need for network management based on the size and complexity of the IBM SNA network at JSC is demonstrated. Network Management consists of being aware of component status and the ability to control resources to meet the availability and service needs of users. The concerns of the user are addressed as well as those of the staff responsible for managing the network. It is explained how NetMaster (a network management system for managing SNA networks) is used to enhance reliability and maximize service to SNA network users through automated procedures. The following areas are discussed: customization, problem and configuration management, and system measurement applications of NetMaster. Also, several examples are given that demonstrate NetMaster's ability to manage and control the network, integrate various product functions, as well as provide useful management information.

  1. Performance characteristics of the Mayo/IBM PACS

    NASA Astrophysics Data System (ADS)

    Persons, Kenneth R.; Gehring, Dale G.; Pavicic, Mark J.; Ding, Yingjai

    1991-07-01

    The Mayo Clinic and IBM (at Rochester, Minnesota) have jointly developed a picture archiving system for use with Mayo's MRI and Neuro CT imaging modalities. The communications backbone of the PACS is a portion of the Mayo institutional network: a series of 4-Mbps token rings interconnected by bridges and fiber optic extensions. The performance characteristics of this system are important to understand because they affect the response time a PACS user can expect, and the response time for non-PACS users competing for resources on the institutional network. The performance characteristics of each component and the average load levels of the network were measured for various load distributions. These data were used to quantify the response characteristics of the existing system and to tune a model developed by North Dakota State University Department of Computer Science for predicting response times of more complex topologies.

  2. Study of Even-Even/Odd-Even/Odd-Odd Nuclei in Zn-Ga-Ge Region in the Proton-Neutron IBM/IBFM/IBFFM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, N.; Brant, S.; Zuffi, L.

    We study the even-even, odd-even and odd-odd nuclei in the region including Zn-Ga-Ge in the proton-neutron IBM and the models derived from it: IBM2, IBFM2, IBFFM2. We describe ⁶⁷Ga, ⁶⁵Zn, and ⁶⁸Ga by coupling odd particles to a boson core ⁶⁶Zn. We also calculate the β⁺-decay rates among ⁶⁸Ge, ⁶⁸Ga and ⁶⁸Zn.

  3. Demonstration of two-qubit algorithms with a superconducting quantum processor.

    PubMed

    DiCarlo, L; Chow, J M; Gambetta, J M; Bishop, Lev S; Johnson, B R; Schuster, D I; Majer, J; Blais, A; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2009-07-09

    Quantum computers, which harness the superposition and entanglement of physical states, could outperform their classical counterparts in solving problems with technological impact, such as factoring large numbers and searching databases. A quantum processor executes algorithms by applying a programmable sequence of gates to an initialized register of qubits, which coherently evolves into a final state containing the result of the computation. Building a quantum processor is challenging because of the need to simultaneously meet requirements that are in conflict: state preparation, long coherence times, universal gate operations and qubit readout. Processors based on a few qubits have been demonstrated using nuclear magnetic resonance, cold ion trap and optical systems, but a solid-state realization has remained an outstanding challenge. Here we demonstrate a two-qubit superconducting processor and the implementation of the Grover search and Deutsch-Jozsa quantum algorithms. We use a two-qubit interaction, tunable in strength by two orders of magnitude on nanosecond timescales, which is mediated by a cavity bus in a circuit quantum electrodynamics architecture. This interaction allows the generation of highly entangled states with concurrence up to 94 per cent. Although this processor constitutes an important step in quantum computing with integrated circuits, continuing efforts to increase qubit coherence times, gate performance and register size will be required to fulfil the promise of a scalable technology.
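
    For reference, the concurrence quoted above has a simple closed form for a pure two-qubit state |psi> = a|00> + b|01> + c|10> + d|11>, namely C = 2|ad - bc|; the snippet below evaluates it for a Bell state (a textbook formula, not code from the paper).

        # Concurrence of a pure two-qubit state, amplitudes ordered |00>,|01>,|10>,|11>.
        import numpy as np

        def concurrence(psi):
            a, b, c, d = psi
            return 2 * abs(a * d - b * c)

        bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
        print(concurrence(bell))                     # -> 1.0, maximally entangled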

  4. Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.

    2010-01-01

    The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies that are aimed at enabling NASA's ability to explore beyond low earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation effects assessment, and radiation environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as Silicon-Germanium (SiGe)) to enhance a device's tolerance to radiation events and low temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project emphasis shifts its focus to developing low-power, high efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability enables avionic architectures the ability to develop FPGA-based, radiation tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space. Specifically, the AAPS tasks for

  5. Time Manager Software for a Flight Processor

    NASA Technical Reports Server (NTRS)

    Zoerne, Roger

    2012-01-01

    Data analysis is a process of inspecting, cleaning, transforming, and modeling data to highlight useful information and suggest conclusions. Accurate timestamps and a timeline of vehicle events are needed to analyze flight data. By moving the timekeeping to the flight processor, there is no longer a need for a redundant time source. If each flight processor is initially synchronized to GPS, they can freewheel and maintain a fairly accurate time throughout the flight with no additional GPS time messages received. However, additional GPS time messages will ensure an even greater accuracy. When a timestamp is required, a gettime function is called that immediately reads the time-base register.
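
    A minimal sketch of the freewheeling scheme, assuming a monotonic counter as a stand-in for the hardware time-base register (the TimeManager class and method names are illustrative, not the flight software's API):

        # Freewheeling clock: local time base plus the offset from the last GPS sync.
        import time

        class TimeManager:
            def __init__(self):
                self.offset = 0.0                       # GPS time minus local time base

            def sync_to_gps(self, gps_time_s):
                self.offset = gps_time_s - time.monotonic()

            def gettime(self):
                # Read the (emulated) time-base register and apply the stored offset.
                return time.monotonic() + self.offset

        tm = TimeManager()
        tm.sync_to_gps(1_000_000.0)     # one GPS time message at startup
        timestamp = tm.gettime()        # later timestamps freewheel from that sync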

  6. A Course on Reconfigurable Processors

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi; Huss, Sorin A.

    2010-01-01

    Reconfigurable computing is an established field in computer science. Teaching this field to computer science students demands special attention due to limited student experience in electronics and digital system design. This article presents a compact course on reconfigurable processors, which was offered at the Technische Universitat Darmstadt,…

  7. PATCH image processor user's manual

    NASA Technical Reports Server (NTRS)

    Nieves, M. J. (Principal Investigator)

    1980-01-01

    The patch image processor extracts patches of various sizes (32 x 32, 64 x 64, 128 x 128, and 256 x 256 pixels) from full-frame LANDSAT imagery data. From the extracted patches, a patch image mosaic is created in the image processing system (IMDACS) format.

  8. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  9. Watchdog activity monitor (WAM) for use with high coverage processor self-test

    NASA Technical Reports Server (NTRS)

    Tulpule, Bhalchandra R. (Inventor); Crosset, III, Richard W. (Inventor); Versailles, Richard E. (Inventor)

    1988-01-01

    A high fault coverage, instruction modeled self-test for a signal processor in a user environment is disclosed. The self-test executes a sequence of sub-tests and issues a state transition signal upon the execution of each sub-test. The self-test may be combined with a watchdog activity monitor (WAM) which provides a test-failure signal in the presence of a counted number of state transitions not agreeing with an expected number. An independent measure of time may be provided in the WAM to increase fault coverage by checking the processor's clock. Additionally, redundant processor systems are protected from inadvertent unsevering of a severed processor using a unique unsever arming technique and apparatus.
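
    Conceptually, the WAM counts the state-transition signals emitted by the self-test within an independently measured time window and flags a failure when the count disagrees with the expected number. The sketch below is a hypothetical illustration of that behaviour, not the patented hardware design.

        # Watchdog activity monitor sketch: count sub-test transitions against an
        # expected number, using an independent measure of elapsed time.
        import time

        class WatchdogActivityMonitor:
            def __init__(self, expected_transitions, window_s):
                self.expected = expected_transitions
                self.window = window_s
                self.count = 0
                self.start = time.monotonic()

            def state_transition(self):          # called once per executed sub-test
                self.count += 1

            def check(self):
                # The independent time reference also catches a stopped processor clock.
                elapsed = time.monotonic() - self.start
                failed = elapsed >= self.window and self.count != self.expected
                return failed                    # True -> raise the test-failure signal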

  10. Electrical Prototype Power Processor for the 30-cm Mercury electric propulsion engine

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Frye, R. J.

    1978-01-01

    An Electrical Prototype Power Processor has been designed to the latest electrical and performance requirements for a flight-type 30-cm ion engine and includes all the necessary power, command, telemetry and control interfaces for a typical electric propulsion subsystem. The power processor was configured into seven separate mechanical modules that would allow subassembly fabrication, test and integration into a complete power processor unit assembly. The conceptual mechanical packaging of the electrical prototype power processor unit demonstrated the relative location of power, high voltage and control electronic components to minimize electrical interactions and to provide adequate thermal control in a vacuum environment. Thermal control was accomplished with a heat pipe simulator attached to the base of the modules.

  11. Reduced power processor requirements for the 30-cm diameter HG ion thruster

    NASA Technical Reports Server (NTRS)

    Rawlin, V. K.

    1979-01-01

    The characteristics of power processors strongly impact the overall performance and cost of electric propulsion systems. A program was initiated to evaluate simplifications of the thruster-power processor interface requirements. The power processor requirements are mission dependent with major differences arising for those missions which require a nearly constant thruster operating point (typical of geocentric and some inbound planetary missions) and those requiring operation over a large range of input power (such as outbound planetary missions). This paper describes the results of tests which have indicated that as many as seven of the twelve power supplies may be eliminated from the present Functional Model Power Processor used with 30-cm diameter Hg ion thrusters.

  12. The Event Based Language and Its Multiple Processor Implementations.

    DTIC Science & Technology

    1980-01-01

    10 6.1 "Recursive" Linear Fibonacci ................................................ 105 6.2 The Readers Writers Problem...kinds. Examples of such systems are: C.mmp [Wu-72], Pluribus [He-73], Data Flow [ De -75], the boolean n-cube parallel machine [Su-77], and the MuNet [Wa...concurrency within programs; therefore, we hate concentrated on two types of systems which seem suitable: a processor network, and a data flow processor [ De -77

  13. Rational calculation accuracy in acousto-optical matrix-vector processor

    NASA Astrophysics Data System (ADS)

    Oparin, V. V.; Tigin, Dmitry V.

    1994-01-01

    The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.

  14. Fabrication Security and Trust of Domain-Specific ASIC Processors

    DTIC Science & Technology

    2016-10-30

    embedded in the design. For example, an ASIC processor potentially has a 10-1,000X performance advantage over its FPGA and GPP counterparts, but ... paper by summarizing our lessons learned from this project and suggests a few research directions. II. DOMAIN-SPECIFIC ASIC PROCESSORS As Figure 1 has ... sponsored by the Assistant Secretary of Defense for Research & Engineering under Air Force Contract #FA8721-05-C-0002. Opinions, interpretations

  15. Validation of the 1/12 degrees Arctic Cap Nowcast/Forecast System (ACNFS)

    DTIC Science & Technology

    2010-11-04

    IBM Power 6 (Davinci) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are ... run using 320 processors on the Navy DSRC IBM Power 6 (Davinci) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour ... meter. As more observations become available, further studies of ice draft will be used as a validation tool. The IABP program archived 102 Argos

  16. Validation of the 1/12 deg Arctic Cap Nowcast/Forecast System (ACNFS)

    DTIC Science & Technology

    2010-11-04

    IBM Power 6 (Davinci) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are ... run using 320 processors on the Navy DSRC IBM Power 6 (Davinci) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour ... meter. As more observations become available, further studies of ice draft will be used as a validation tool. The IABP program archived 102 Argos

  17. Letter from Hong Kong: A Report on Chinese Food, Fake Apples, and IBM's Asian Strategy.

    ERIC Educational Resources Information Center

    Immel, A. Richard

    1984-01-01

    Notes that microcomputer use in Hong Kong's small business community does not reflect the growth of its high-tech electronics industry and discusses IBM's influence in Hong Kong and Asia, the counterfeiting of Apple microcomputers and software, and why Apple currently has no recourse. (MBR)

  18. Application of convolve-multiply-convolve SAW processor for satellite communications

    NASA Technical Reports Server (NTRS)

    Lie, Y. S.; Ching, M.

    1991-01-01

    There is a need for a satellite communications receiver that can perform simultaneous multi-channel processing of single channel per carrier (SCPC) signals originating from various small (mobile or fixed) earth stations. The number of ground users can be as many as 1000. The conventional technique for simultaneously processing these signals is to employ as many RF bandpass filters as there are channels. Consequently, such an approach would result in a bulky receiver, which becomes impractical for satellite applications. A unique approach utilizing a real-time surface acoustic wave (SAW) chirp transform processor is presented. The application of a Convolve-Multiply-Convolve (CMC) chirp transform processor is described. The CMC processor transforms each input channel into a unique timeslot, while preserving its modulation content (in this case QPSK). Subsequently, each channel is individually demodulated without the need for input channel filters. Circuit complexity is significantly reduced, because the output frequency of the CMC processor is common for all input channel frequencies. Theoretical analysis and experimental results are in good agreement.

  19. Slime mould processors, logic gates and sensors.

    PubMed

    Adamatzky, A

    2015-07-28

    A heterotic, or hybrid, computation implies that two or more substrates of different physical nature are merged into a single device with indistinguishable parts. These hybrid devices then undertake coherent acts on programmable and sensible processing of information. We study the potential of heterotic computers using slime mould acting under the guidance of chemical, mechanical and optical stimuli. The plasmodium of the acellular slime mould Physarum polycephalum is a gigantic single cell visible to the unaided eye. The cell shows a rich spectrum of behavioural morphological patterns in response to changing environmental conditions. Given data represented by chemical or physical stimuli, we can employ and modify the behaviour of the slime mould to make it solve a range of computing and sensing tasks. We overview results of laboratory experimental studies on prototyping of slime mould morphological processors for approximation of Voronoi diagrams, planar shapes and solving mazes, and discuss logic gates implemented via collision of active growing zones and tactile responses of P. polycephalum. We also overview a range of electronic components (memristor, chemical, tactile and colour sensors) made of the slime mould.

  20. IBM system/360 assembly language interval arithmetic software

    NASA Technical Reports Server (NTRS)

    Phillips, E. J.

    1972-01-01

    Computer software designed to perform interval arithmetic is described. An interval is defined as the set of all real numbers between two given numbers, including or excluding one or both endpoints. Interval arithmetic consists of the various elementary arithmetic operations defined on the set of all intervals, such as interval addition, subtraction, union, etc. One of the main applications of interval arithmetic is in the area of error analysis of computer calculations. For example, it has been used successfully to compute bounds on rounding errors in the solution of linear algebraic systems, error bounds in numerical solutions of ordinary differential equations, as well as integral equations and boundary value problems. The described software enables users to implement algorithms of the type described in the references efficiently on the IBM 360 system.
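
    The elementary operations are easy to illustrate. The sketch below shows closed-interval addition, subtraction, and multiplication in plain Python floats, so it omits the directed-rounding control that a rigorous implementation such as the described assembly-language software requires.

        # Minimal closed-interval arithmetic sketch.
        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __sub__(self, other):
                return Interval(self.lo - other.hi, self.hi - other.lo)

            def __mul__(self, other):
                products = [self.lo * other.lo, self.lo * other.hi,
                            self.hi * other.lo, self.hi * other.hi]
                return Interval(min(products), max(products))

            def __repr__(self):
                return f"[{self.lo}, {self.hi}]"

        # A value known only to lie in [1.9, 2.1] times one known to lie in [3.0, 3.2]:
        print(Interval(1.9, 2.1) * Interval(3.0, 3.2))   # -> [5.7, 6.72]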

  1. Reconfigurable lattice mesh designs for programmable photonic processors.

    PubMed

    Pérez, Daniel; Gasulla, Ivana; Capmany, José; Soref, Richard A

    2016-05-30

    We propose and analyse two novel mesh design geometries for the implementation of tunable optical cores in programmable photonic processors. These geometries are the hexagonal and the triangular lattice. They are compared here to a previously proposed square mesh topology in terms of a series of figures of merit that account for metrics relevant to on-chip integration of the mesh. We find that the hexagonal mesh is the most suitable option of the three considered for the implementation of the reconfigurable optical core in the programmable processor.

  2. Ring-array processor distribution topology for optical interconnects

    NASA Technical Reports Server (NTRS)

    Li, Yao; Ha, Berlin; Wang, Ting; Wang, Sunyu; Katz, A.; Lu, X. J.; Kanterakis, E.

    1992-01-01

    The existing linear and rectangular processor distribution topologies for optical interconnects, although promising in many respects, cannot solve problems such as clock skews, the lack of supporting elements for efficient optical implementation, etc. The use of a ring-array processor distribution topology, however, can overcome these problems. Here, a study of the ring-array topology is conducted with an aim of implementing various fast clock rate, high-performance, compact optical networks for digital electronic multiprocessor computers. Practical design issues are addressed. Some proof-of-principle experimental results are included.

  3. Interactive high-resolution isosurface ray casting on multicore processors.

    PubMed

    Wang, Qin; JaJa, Joseph

    2008-01-01

    We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor a quad-core 1.86-GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024 x 1024 screen for all the datasets tested up to the maximum size of the main memory of our platform.
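
    The dynamic-allocation idea amounts to a shared queue of small pixel-tile tasks from which threads pull work on demand, so faster threads simply render more tiles. The sketch below is a generic Python illustration of that pattern (the cast_tile callback and tile size are assumptions, and Python threads are only a stand-in for the native threads used in the paper).

        # Dynamic work assignment: threads pull small screen tiles from a shared queue.
        from concurrent.futures import ThreadPoolExecutor
        from queue import Queue, Empty

        def render_image(width, height, tile, cast_tile, n_threads=4):
            tasks = Queue()
            for y in range(0, height, tile):
                for x in range(0, width, tile):
                    tasks.put((x, y, min(tile, width - x), min(tile, height - y)))

            def worker():
                done = []
                while True:
                    try:
                        t = tasks.get_nowait()   # pull the next tile on demand,
                    except Empty:                # so faster threads render more tiles
                        return done
                    done.append(cast_tile(*t))

            with ThreadPoolExecutor(max_workers=n_threads) as pool:
                futures = [pool.submit(worker) for _ in range(n_threads)]
                return [r for f in futures for r in f.result()]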

  4. 77 FR 124 - Biological Processors of Alabama; Decatur, Morgan County, AL; Notice of Settlement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-03

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9612-9] Biological Processors of Alabama; Decatur, Morgan... reimbursement of past response costs concerning the Biological Processors of Alabama Superfund Site located in... Ms. Paula V. Painter. Submit your comments by Site name Biological Processors of Alabama Superfund...

  5. A hierarchical, automated target recognition algorithm for a parallel analog processor

    NASA Technical Reports Server (NTRS)

    Woodward, Gail; Padgett, Curtis

    1997-01-01

    A hierarchical approach is described for an automated target recognition (ATR) system, VIGILANTE, that uses a massively parallel, analog processor (3DANN). The 3DANN processor is capable of performing 64 concurrent inner products of size 1x4096 every 250 nanoseconds.
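
    The quoted rate implies a very high arithmetic throughput; the back-of-envelope calculation below (illustrative arithmetic only, counting each multiply-accumulate as two operations) puts it on the order of 2 x 10^12 operations per second.

```python
# Back-of-envelope check of the stated 3DANN rate (illustrative arithmetic only).
inner_products = 64        # concurrent 1x4096 inner products
vector_length = 4096
period_s = 250e-9          # one batch every 250 nanoseconds

macs_per_period = inner_products * vector_length
ops_per_second = 2 * macs_per_period / period_s   # each multiply-accumulate counted as 2 ops
print(f"{ops_per_second:.2e} ops/s")              # ~2.1e12, i.e. on the order of tera-ops
```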

  6. Hypercluster Parallel Processor

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela

    1992-01-01

    Hypercluster computer system includes multiple digital processors, operation of which coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.

  7. NJE; VAX-VMS IBM NJE network protocol emulator. [DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Raffenetti, C.

    NJE is communications software developed to enable a VAX VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE supports job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any network node for printing, punching, or job submission, or to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously. No changes are required to the IBM software. DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%); VMS 2.5; VAX11/780 with DUP-11 UNIBUS interface and 9600 baud synchronous modem.

  8. Software design and documentation language, revision 1

    NASA Technical Reports Server (NTRS)

    Kleine, H.

    1979-01-01

    The Software Design and Documentation Language (SDDL) developed to provide an effective communications medium to support the design and documentation of complex software applications is described. Features of the system include: (1) a processor which can convert design specifications into an intelligible, informative machine-reproducible document; (2) a design and documentation language with forms and syntax that are simple, unrestrictive, and communicative; and (3) methodology for effective use of the language and processor. The SDDL processor is written in the SIMSCRIPT II programming language and is implemented on the UNIVAC 1108, the IBM 360/370, and Control Data machines.

  9. A simple integrative method for presenting head-contingent motion parallax and disparity cues on intel x86 processor-based machines.

    PubMed

    Szatmary, J; Hadani, I; Julesz, B

    1997-01-01

    Rogers and Graham (1979) developed a system to show that head-movement-contingent motion parallax produces monocular depth perception in random dot patterns. Their display system comprised an oscilloscope driven by function generators or a special graphics board that triggered the X and Y deflection of the raster scan signal. Replication of this system required costly hardware that is no longer on the market. In this paper the Rogers-Graham method is reproduced with an Intel processor based IBM PC compatible machine with no additional hardware cost. An adapted joystick sampled through the standard game-port can serve as a provisional head-movement sensor. Monitor resolution for displaying motion is effectively enhanced 16 times by the use of anti-aliasing, enabling the display of thousands of random dots in real-time with a refresh rate of 60 Hz or above. A color monitor enables the use of the anaglyph method, thus combining stereoscopic and monocular parallax on a single display without the loss of speed. The power of this system is demonstrated by a psychophysical measurement in which subjects nulled head-movement-contingent illusory parallax, evoked by a static stereogram, with real parallax. The amount of real parallax required to null the illusory stereoscopic parallax monotonically increased with disparity.

  10. Phase coherence adaptive processor for automatic signal detection and identification

    NASA Astrophysics Data System (ADS)

    Wagstaff, Ronald A.

    2006-05-01

    A continuously adapting acoustic signal processor with an automatic detection/decision aid is presented. Its purpose is to preserve the signals of tactical interest, and filter out other signals and noise. It utilizes single sensor or beamformed spectral data and transforms the signal and noise phase angles into "aligned phase angles" (APA). The APA increase the phase temporal coherence of signals and leave the noise incoherent. Coherence thresholds are set, which are representative of the type of source "threat vehicle" and the geographic area or volume in which it is operating. These thresholds separate signals, based on the "quality" of their APA coherence. An example is presented in which signals from a submerged source in the ocean are preserved, while clutter signals from ships and noise are entirely eliminated. Furthermore, the "signals of interest" were identified by the processor's automatic detection aid. Similar performance is expected for air and ground vehicles. The processor's equations are formulated in such a manner that they can be tuned to eliminate noise and exploit signal, based on the "quality" of their APA temporal coherence. The mathematical formulation for this processor is presented, including the method by which the processor continuously self-adapts. Results show nearly complete elimination of noise, with only the selected category of signals remaining, and accompanying enhancements in spectral and spatial resolution. In most cases, the concept of signal-to-noise ratio loses significance, and the "adaptive automated detection/decision aid" is more relevant.

  11. Materials accounting system for an IBM PC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearse, R.C.; Thomas, R.J.; Henslee, S.P.

    1986-01-01

    We have adapted the Los Alamos MASS accounting system for use on an IBM PC/AT at the Fuels Manufacturing Facility (FMF) at Argonne National Laboratory-West (ANL-WEST) in Idaho Falls, Idaho. Cost of hardware and proprietary software was less than $10,000 per station. The system consists of three stations between which accounting information is transferred using floppy disks accompanying special nuclear material shipments. The programs were implemented in dBASEIII and were compiled using the proprietary software CLIPPER. Modifications to the inventory can be posted in just a few minutes, and operator/computer interaction is nearly instantaneous. After the records are built by the user, it takes 4 to 5 seconds to post the results to the database files. A version of this system was specially adapted and is currently in use at the FMF facility at Argonne National Laboratory in Idaho Falls. Initial satisfaction is adequate and software and hardware problems are minimal.

  12. MOLECULAR DESIGNER: an interactive program for the display of protein structure on the IBM-PC.

    PubMed

    Hannon, G J; Jentoft, J E

    1985-09-01

    A BASIC interactive graphics program has been developed for the IBM-PC which utilizes the graphics capabilities of that computer to display and manipulate protein structure from coordinates. Structures may be generated from typed files, or from Brookhaven National Laboratories' Protein Data Bank data tapes. Once displayed, images may be rotated, translated and expanded to any desired size. Figures may be viewed as ball-and-stick or space-filling models. Calculated multiple-point perspective may also be added to the display. Docking manipulations are possible since more than a single figure may be displayed and manipulated simultaneously. Further, stereo images and red/blue three-dimensional images may be generated using the accompanying DESIPLOT program and an HP-7475A plotter. A version of the program is also currently available for the Apple Macintosh. Full implementation on the Macintosh requires 512 K and at least one disk drive. Otherwise this version is essentially identical to the IBM-PC version described herein.

  13. Prototype Focal-Plane-Array Optoelectronic Image Processor

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey

    1995-01-01

    Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black and white) images. Each processing element corresponds to one picture element of image and located at that picture element. Includes input-plane photodetector in form of parasitic phototransistor part of processing circuit. Output of each processing circuit used to modulate one picture element in output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform image into set of features suitable for high-level processing; e.g., recognition.

  14. 17 CFR 242.609 - Registration of securities information processors: form of application and amendments.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... information processors: form of application and amendments. 242.609 Section 242.609 Commodity and Securities....609 Registration of securities information processors: form of application and amendments. (a) An application for the registration of a securities information processor shall be filed on Form SIP (§ 249.1001...

  15. 17 CFR 242.609 - Registration of securities information processors: form of application and amendments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... information processors: form of application and amendments. 242.609 Section 242.609 Commodity and Securities....609 Registration of securities information processors: form of application and amendments. (a) An application for the registration of a securities information processor shall be filed on Form SIP (§ 249.1001...

  16. ARTS III/Parallel Processor Design Study

    DOT National Transportation Integrated Search

    1975-04-01

    It was the purpose of this design study to investigate the feasibility, suitability, and cost-effectiveness of augmenting the ARTS III failsafe/failsoft multiprocessor system with a form of parallel processor to accommodate a large growth in air traff...

  17. Matching the Word Processor to the Job.

    ERIC Educational Resources Information Center

    Synder, Carin

    1982-01-01

    The intelligent purchase of school office equipment, specifically word processors, typewriters, calculators, and furniture, requires analysis of present needs and a realistic evaluation of future needs. (MLF)

  18. IMPLEMENTATION OF THE SMOKE EMISSION DATA PROCESSOR AND SMOKE TOOL INPUT DATA PROCESSOR IN MODELS-3

    EPA Science Inventory

    The U.S. Environmental Protection Agency has implemented Version 1.3 of SMOKE (Sparse Matrix Object Kernel Emission) processor for preparation of area, mobile, point, and biogenic sources emission data within Version 4.1 of the Models-3 air quality modeling framework. The SMOK...

  19. PREMAQ: A NEW PRE-PROCESSOR TO CMAQ FOR AIR-QUALITY FORECASTING

    EPA Science Inventory

    A new pre-processor to CMAQ (PREMAQ) has been developed as part of the national air-quality forecasting system. PREMAQ combines the functionality of MCIP and parts of SMOKE in a single real-time processor. PREMAQ was specifically designed to link NCEP's Eta model with CMAQ, and...

  20. Ground Terminal Processor Interface Board for Skynet Uplink Synchronization Trials

    DTIC Science & Technology

    1997-11-01

    [Report front matter: National Defence / Défense nationale, GROUND TERMINAL PROCESSOR INTERFACE BOARD FOR SKYNET UPLINK SYNCHRONIZATION TRIALS, by Caroline Tom.] ...aspects of uplink synchronization for extremely-high-frequency (EHF) spread spectrum satellite communications (SATCOM). Requirements of the GT subsystem

  1. Processor Emulator with Benchmark Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  2. Fast, Massively Parallel Data Processors

    NASA Technical Reports Server (NTRS)

    Heaton, Robert A.; Blevins, Donald W.; Davis, ED

    1994-01-01

    Proposed fast, massively parallel data processor contains 8x16 array of processing elements with efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on "X" interconnection grid with external memory via high-capacity input/output bus. This approach to conditional operation nearly doubles speed of various arithmetic operations.

  3. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

    Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. Performing pixel value calculations, balancing the load across the processors, and applying the results to the Z buffer efficiently in parallel requires special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.

  4. Processor farming in two-level analysis of historical bridge

    NASA Astrophysics Data System (ADS)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain mesoscopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis which has been developed to determine the influence of climatic loading on the current state of the bridge.
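
    The processor-farming pattern itself is simple to sketch: a master process walks over the macro-scale integration points and farms the attached meso-scale RVE solves out to a pool of workers, collecting the effective parameters as they return. The snippet below is a hedged illustration of that pattern only; solve_rve, its inputs, and the returned effective parameters are hypothetical placeholders, not the actual bridge analysis.

```python
# Hedged sketch of processor farming: one RVE (meso-scale) task per macro-scale
# integration point, distributed over worker processes by a master.

from multiprocessing import Pool

def solve_rve(macro_point):
    # placeholder for solving the representative volume element attached to this
    # integration point; returns effective (homogenized) parameters for the macro scale
    temperature, moisture = macro_point
    return {"conductivity": 1.5 + 0.01 * temperature, "diffusivity": 1e-9 * (1 + moisture)}

if __name__ == "__main__":
    macro_points = [(20.0, 0.3), (21.5, 0.4), (19.0, 0.5)]   # (temperature, moisture) states
    with Pool(processes=4) as farm:
        effective = farm.map(solve_rve, macro_points)        # one RVE task per macro point
    print(effective)
```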

  5. Contextual classification on a CDC Flexible Processor system. [for photomapped remote sensing data

    NASA Technical Reports Server (NTRS)

    Smith, B. W.; Siegel, H. J.; Swain, P. H.

    1981-01-01

    A potential hardware organization for the Flexible Processor Array is presented. An algorithm that implements a contextual classifier for remote sensing data analysis is given, along with uniprocessor classification algorithms. The Flexible Processor algorithm is provided, as are simulated timings for contextual classifiers run on the Flexible Processor Array and another system. The timings are analyzed for context neighborhoods of sizes three and nine.

  6. FANTOM: Algorithm-Architecture Codesign for High-Performance Embedded Signal and Image Processing Systems

    DTIC Science & Technology

    2013-05-25

    graphics processors by IBM, AMD, and nVIDIA. They are between general-purpose processors and special-purpose processors. In Phase II. 3.10 Measure of...particular, Dr. Kevin Irick started a company, Silicon Scapes, and he has been the CEO. 5 Implications for Related/Future Research: We speculate that...final project report in Jan. 2011. At the test and validation stage of the project, FANTOM's partner at Raytheon quit from his company and hence from

  7. 30/20 GHz communications systems baseband processor development

    NASA Astrophysics Data System (ADS)

    Brown, L.; Sabourin, D.; Stilwell, J.; McCallister, R.; Borota, M.

    The architecture and system design concepts for a commercial satellite communications system planned for the 1990's have been developed. The system provides data communications between the individual users via trunking and customer premise service terminals utilizing a central switching satellite operating in a time-division multiple-access mode. Baseband processing is employed to route and control traffic on an individual message basis while providing significant advantages in improved link margins and system flexibility. Key technology developments required to prove the flight readiness of the baseband processor design are being verified in the baseband processor proof-of-concept model described herein.

  8. Embedded Data Processor and Portable Computer Technology testbeds

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Liu, Yuan-Kwei; Goforth, Andre; Fernquist, Alan R.

    1993-01-01

    Attention is given to current activities in the Embedded Data Processor and Portable Computer Technology testbed configurations that are part of the Advanced Data Systems Architectures Testbed at the Information Sciences Division at NASA Ames Research Center. The Embedded Data Processor Testbed evaluates advanced microprocessors for potential use in mission and payload applications within the Space Station Freedom Program. The Portable Computer Technology (PCT) Testbed integrates and demonstrates advanced portable computing devices and data system architectures. The PCT Testbed uses both commercial and custom-developed devices to demonstrate the feasibility of functional expansion and networking for portable computers in flight missions.

  9. 30/20 GHz communications systems baseband processor development

    NASA Technical Reports Server (NTRS)

    Brown, L.; Sabourin, D.; Stilwell, J.; Mccallister, R.; Borota, M.

    1982-01-01

    The architecture and system design concepts for a commercial satellite communications system planned for the 1990's have been developed. The system provides data communications between the individual users via trunking and customer premise service terminals utilizing a central switching satellite operating in a time-division multiple-access mode. Baseband processing is employed to route and control traffic on an individual message basis while providing significant advantages in improved link margins and system flexibility. Key technology developments required to prove the flight readiness of the baseband processor design are being verified in the baseband processor proof-of-concept model described herein.

  10. Design of an integrated fuel processor for residential PEMFCs applications

    NASA Astrophysics Data System (ADS)

    Seo, Yu Taek; Seo, Dong Joo; Jeong, Jin Hyeok; Yoon, Wang Lai

    KIER has been developing a novel fuel processing system to provide hydrogen-rich gas to a residential PEMFC system. For the effective design of a compact hydrogen production system, each unit process for steam reforming and water gas shift has a steam generator and internal heat exchangers that are thermally and physically integrated into a single packaged hardware system. The newly designed fuel processor (prototype II) showed a thermal efficiency of 78% on an HHV basis with a methane conversion of 89%. The preferential oxidation unit, with two staged cascade reactors, reduces the CO concentration to below 10 ppm without complicated temperature control hardware, which is the prerequisite CO limit for the PEMFC stack. After the initial performance of the fuel processor was achieved, partial-load operation was carried out to test the performance and reliability of the fuel processor at various loads. The stability of the fuel processor was also demonstrated for three successive days with a stable composition of product gas and thermal efficiency. The CO concentration remained below 10 ppm during the test period and confirmed the stable performance of the two-stage PrOx reactors.

  11. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    PubMed Central

    Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332

  12. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.

    PubMed

    Sharma, Anuj; Manolakos, Elias S

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub.

  13. Development of a Novel, Two-Processor Architecture for a Small UAV Autopilot System,

    DTIC Science & Technology

    2006-07-26

    is, and the control laws the user implements to control it. The flight control system board will contain the processor selected for this system...Unit (IMU). The IMU contains solid-state gyros and accelerometers and uses these to determine the attitude of the UAV within the three dimensions of...multiple-UAV swarming for combat support operations. The mission processor board will contain the processor selected to execute the mission

  14. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-01-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f,P) = 1 / ((1 - f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
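
    A quick numerical reading of the reconstructed formula shows how sharply the theoretical speedup saturates when the serial fraction is nonzero; the snippet below simply evaluates S(f, P) for a few processor counts.

```python
# Amdahl's Law as reconstructed above: S(f, P) = 1 / ((1 - f) + f / P),
# where f is the parallelizable fraction and P the number of processors.

def amdahl_speedup(f, p):
    return 1.0 / ((1.0 - f) + f / p)

# Even a highly parallel Monte Carlo run (f = 0.95) saturates well below P:
for p in (2, 8, 32, 128):
    print(p, round(amdahl_speedup(0.95, p), 2))   # ~1.9, 5.9, 12.5, 17.4
```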

  15. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-03-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f,P) = 1 / ((1 - f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.

  16. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION WITH CLIPSITS)

    NASA Technical Reports Server (NTRS)

    Riley

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh
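
    As a very loose illustration of the forward-chaining idea described above (rules firing whenever their condition facts are present), the toy loop below repeatedly applies rules until no new facts can be asserted. It is a naive sketch with made-up facts; CLIPS itself relies on the far more efficient Rete pattern-matching algorithm mentioned in the abstract.

```python
# Toy forward-chaining loop illustrating the fact/rule model (not the Rete algorithm).

facts = {("duck", "is", "animal"), ("animal", "is", "living")}

# each rule: if all condition facts are present, assert the conclusion fact
rules = [
    ({("duck", "is", "animal"), ("animal", "is", "living")}, ("duck", "is", "living")),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # fire the rule: assert the new fact
            changed = True

print(facts)
```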

  17. Digital data, composite video multiplexer and demultiplexer boards for an IBM PC/AT compatible computer

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    1993-01-01

    Work continued on the design of two IBM PC/AT compatible computer interface boards. The boards will permit digital data to be transmitted over a composite video channel from the Orbiter. One board combines data with a composite video signal. The other board strips the data from the video signal.

  18. Application of Advanced Multi-Core Processor Technologies to Oceanographic Research

    DTIC Science & Technology

    2013-09-30

    [Table fragment: candidate embedded processors grouped by power class, including STM32, NXP LPC series, Microchip PIC32/DSPIC, ARM Cortex, TI OMAP, TI Sitara, Broadcom BCM2835, and FPGAs.] ...state-of-the-art information processing architectures. OBJECTIVES: Next-generation processor architectures (multi-core, multi-threaded) hold the

  19. IBM Demonstrates a General-Purpose, High-Performance, High-Availability Cloud-Hosted Data Distribution System With Live GOES-16 Weather Satellite Data

    NASA Astrophysics Data System (ADS)

    Snyder, P. L.; Brown, V. W.

    2017-12-01

    IBM has created a general purpose, data-agnostic solution that provides high performance, low data latency, high availability, scalability, and persistent access to the captured data, regardless of source or type. This capability is hosted on commercially available cloud environments and uses much faster, more efficient, reliable, and secure data transfer protocols than the more typically used FTP. The design incorporates completely redundant data paths at every level, including at the cloud data center level, in order to provide the highest assurance of data availability to the data consumers. IBM has been successful in building and testing a Proof of Concept instance on our IBM Cloud platform to receive and disseminate actual GOES-16 data as it is being downlinked. This solution leverages the inherent benefits of a cloud infrastructure configured and tuned for continuous, stable, high-speed data dissemination to data consumers worldwide at the downlink rate. It also is designed to ingest data from multiple simultaneous sources and disseminate data to multiple consumers. Nearly linear scalability is achieved by adding servers and storage. The IBM Proof of Concept system has been tested with our partners to achieve in excess of 5 Gigabits/second over public internet infrastructure. In tests with live GOES-16 data, the system routinely achieved 2.5 Gigabits/second pass-through to The Weather Company from the University of Wisconsin-Madison SSEC. Simulated data was also transferred from the Cooperative Institute for Climate and Satellites — North Carolina to The Weather Company, as well. The storage node allocated to our Proof of Concept system as tested was sized at 480 Terabytes of RAID protected disk as a worst case sizing to accommodate the data from four GOES-16 class satellites for 30 days in a circular buffer. This shows that an abundance of performance and capacity headroom exists in the IBM design that can be applied to additional missions.
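
    Reading the stated sizing back as arithmetic (an illustrative calculation only), 480 TB shared across four satellites for 30 days corresponds to roughly 4 TB per satellite per day, i.e. an average ingest rate of a few hundred megabits per second per satellite.

```python
# Back-of-envelope reading of the stated 480 TB / 4 satellites / 30 days buffer sizing.
total_bytes = 480e12          # 480 TB circular buffer
satellites = 4
days = 30

per_sat_per_day = total_bytes / (satellites * days)       # ~4 TB per satellite per day
sustained_mbit_s = per_sat_per_day * 8 / 86400 / 1e6       # implied average ingest rate
print(f"{per_sat_per_day/1e12:.1f} TB/day/satellite, ~{sustained_mbit_s:.0f} Mbit/s sustained")
```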

  20. Automated error correction in IBM quantum computer and explicit generalization

    NASA Astrophysics Data System (ADS)

    Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.

    2018-06-01

    Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code for maximally entangled n-qudit case, which could both detect and automatically correct any arbitrary phase-change error, or any phase-flip error, or any bit-flip error, or combined error of all types of error.

  1. Homomorphic encryption experiments on IBM's cloud quantum computing platform

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Zhao, You-Wei; Li, Tan; Li, Feng-Guang; Du, Yu-Tao; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su

    2017-02-01

    Quantum computing has undergone rapid development in recent years. Owing to limitations on scalability, personal quantum computers still seem slightly unrealistic in the near future. The first practical quantum computer for ordinary users is likely to be on the cloud. However, the adoption of cloud computing is possible only if security is ensured. Homomorphic encryption is a cryptographic protocol that allows computation to be performed on encrypted data without decrypting them, so it is well suited to cloud computing. Here, we first applied homomorphic encryption on IBM's cloud quantum computer platform. In our experiments, we successfully implemented a quantum algorithm for linear equations while protecting our privacy. This demonstration opens a feasible path to the next stage of development of cloud quantum information technology.

  2. Micromechanical Signal Processors

    NASA Astrophysics Data System (ADS)

    Nguyen, Clark Tu-Cuong

    Completely monolithic high-Q micromechanical signal processors constructed of polycrystalline silicon and integrated with CMOS electronics are described. The signal processors implemented include an oscillator, a bandpass filter, and a mixer + filter--all of which are components commonly required for up- and down-conversion in communication transmitters and receivers, and all of which take full advantage of the high Q of micromechanical resonators. Each signal processor is designed, fabricated, then studied with particular attention to the performance consequences associated with miniaturization of the high-Q element. The fabrication technology which realizes these components merges planar integrated circuit CMOS technologies with those of polysilicon surface micromachining. The technologies are merged in a modular fashion, where the CMOS is processed in the first module, the microstructures in a following separate module, and at no point in the process sequence are steps from each module intermixed. Although the advantages of such modularity include flexibility in accommodating new module technologies, the developed process constrained the CMOS metallization to a high temperature refractory metal (tungsten metallization with TiSi _2 contact barriers) and constrained the micromachining process to long-term temperatures below 835^circC. Rapid-thermal annealing (RTA) was used to relieve residual stress in the mechanical structures. To reduce the complexity involved with developing this merged process, capacitively transduced resonators are utilized. High-Q single resonator and spring-coupled micromechanical resonator filters are also investigated, with particular attention to noise performance, bandwidth control, and termination design. The noise in micromechanical filters is found to be fairly high due to poor electromechanical coupling on the micro-scale with present-day technologies. Solutions to this high series resistance problem are suggested, including smaller

  3. Next Generation Security for the 10,240 Processor Columbia System

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas; Kolano, Paul; Shaw, Derek; Keller, Chris; Tweton, Dave; Welch, Todd; Liu, Wen (Betty)

    2005-01-01

    This presentation includes a discussion of the Columbia 10,240-processor system located at the NASA Advanced Supercomputing (NAS) division at the NASA Ames Research Center which supports each of NASA's four missions: science, exploration systems, aeronautics, and space operations. It is comprised of 20 Silicon Graphics nodes, each consisting of 512 Itanium II processors. A 64 processor Columbia front-end system supports users as they prepare their jobs and then submits them to the PBS system. Columbia nodes and front-end systems use the Linux OS. Prior to SC04, the Columbia system was used to attain a processing speed of 51.87 TeraFlops, which made it number two on the Top 500 list of the world's supercomputers and the world's fastest "operational" supercomputer since it was fully engaged in supporting NASA users.

  4. Sentinel-2 Level 2A Prototype Processor: Architecture, Algorithms And First Results

    NASA Astrophysics Data System (ADS)

    Muller-Wilm, Uwe; Louis, Jerome; Richter, Rudolf; Gascon, Ferran; Niezette, Marc

    2013-12-01

    Sen2Core is a prototype processor for Sentinel-2 Level 2A product processing and formatting. The processor is developed for and with ESA and performs the tasks of Atmospheric Correction and Scene Classification of Level 1C input data. Level 2A outputs are: Bottom-Of- Atmosphere (BOA) corrected reflectance images, Aerosol Optical Thickness-, Water Vapour-, Scene Classification maps and Quality indicators, including cloud and snow probabilities. The Level 2A Product Formatting performed by the processor follows the specification of the Level 1C User Product.

  5. A concept for magazine Bimat processor

    NASA Technical Reports Server (NTRS)

    Park, C. E.

    1969-01-01

    Concept utilizes existing film magazines to process photographic film as the film is exposed. A standard magazine can be converted to a Bimat processor by adding three stainless steel rollers. All chemicals required for processing and fixing the negative are contained in the Bimat film.

  6. Multipurpose silicon photonics signal processor core.

    PubMed

    Pérez, Daniel; Gasulla, Ivana; Crudgington, Lee; Thomson, David J; Khokhar, Ali Z; Li, Ke; Cao, Wei; Mashanovich, Goran Z; Capmany, José

    2017-09-21

    Integrated photonics changes the scaling laws of information and communication systems offering architectural choices that combine photonics with electronics to optimize performance, power, footprint, and cost. Application-specific photonic integrated circuits, where particular circuits/chips are designed to optimally perform particular functionalities, require a considerable number of design and fabrication iterations leading to long development times. A different approach inspired by electronic Field Programmable Gate Arrays is the programmable photonic processor, where a common hardware implemented by a two-dimensional photonic waveguide mesh realizes different functionalities through programming. Here, we report the demonstration of such reconfigurable waveguide mesh in silicon. We demonstrate over 20 different functionalities with a simple seven hexagonal cell structure, which can be applied to different fields including communications, chemical and biomedical sensing, signal processing, multiprocessor networks, and quantum information systems. Our work is an important step toward this paradigm. Integrated optical circuits today are typically designed for a few special functionalities and require complex design and development procedures. Here, the authors demonstrate a reconfigurable but simple silicon waveguide mesh with different functionalities.

  7. Program for Calculating the Cubic and Fifth Roots of a Number by Newton's Method (610 IBM Electronic Computer); PROGRAMMA PER IL CALCOLO DELLE RADICI TERZA E QUINTA DI UN NUMERO, COL METODO DI NEWTON (CALCOLATORE ELETTRONICO IBM 610)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giucci, D.

    1963-01-01

    A program was devised for calculating the cubic and fifth roots of a number by Newton's method using the 610 IBM electronic computer. For convenience a program was added for obtaining nth roots by the logarithmic method. (auth)
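
    The iteration behind such a program is short enough to sketch. Below is a generic Newton update for the n-th root of a positive number in Python, not the original IBM 610 code; the starting guess and tolerance are arbitrary choices.

```python
# Newton's iteration for the n-th root of a:  x_{k+1} = x_k - (x_k**n - a) / (n * x_k**(n-1))

def nth_root(a, n, x0=1.0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = (x**n - a) / (n * x**(n - 1))
        x -= step
        if abs(step) < tol:
            break
    return x

print(nth_root(27.0, 3))   # cubic root -> 3.0
print(nth_root(32.0, 5))   # fifth root -> 2.0
```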

  8. A techno-economic comparison of fuel processors utilizing diesel for solid oxide fuel cell auxiliary power units

    NASA Astrophysics Data System (ADS)

    Nehter, Pedro; Hansen, John Bøgild; Larsen, Peter Koch

    Ultra-low sulphur diesel (ULSD) is the preferred fuel for mobile auxiliary power units (APU). The commercially available technologies in the kW range are combustion-engine-based gensets, achieving system efficiencies of about 20%. Solid oxide fuel cells (SOFC) promise improvements with respect to efficiency and emission, particularly for the low power range. Fuel processing methods, i.e., catalytic partial oxidation, autothermal reforming, and steam reforming, have been demonstrated to operate on diesel with various sulphur contents. The choice of fuel processing method strongly affects the SOFC's system efficiency and power density. This paper investigates the impact of fuel processing methods on the economic potential of SOFC APUs, taking variable and capital cost into account. Autonomous concepts without any external water supply are compared with anode recycle configurations. The cost of electricity is very sensitive to the choice of the O/C ratio and the temperature conditions of the fuel processor. A sensitivity analysis is applied to identify the most cost-effective concept for different economic boundary conditions. The favourite concepts are discussed with respect to technical challenges and requirements operating in the presence of sulphur.

  9. Linear circuit analysis program for IBM 1620 Monitor 2, 1311/1443 data processing system /CIRCS/

    NASA Technical Reports Server (NTRS)

    Hatfield, J.

    1967-01-01

    CIRCS is modification of IBSNAP Circuit Analysis Program, for use on smaller systems. This data processing system retains the basic dc, transient analysis, and FORTRAN 2 formats. It can be used on the IBM 1620/1311 Monitor I Mod 5 system, and solves a linear network containing 15 nodes and 45 branches.

  10. Proton exchange membrane fuel cell technology for transportation applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swathirajan, S.

    1996-04-01

    Proton Exchange Membrane (PEM) fuel cells are extremely promising as future power plants in the transportation sector to achieve an increase in energy efficiency and eliminate environmental pollution due to vehicles. GM is currently involved in a multiphase program with the US Department of Energy for developing a proof-of-concept hybrid vehicle based on a PEM fuel cell power plant and a methanol fuel processor. Other participants in the program are Los Alamos National Labs, Dow Chemical Co., Ballard Power Systems, and DuPont Co. In the just-completed phase 1 of the program, a 10 kW PEM fuel cell power plant was built and tested to demonstrate the feasibility of integrating a methanol fuel processor with a PEM fuel cell stack. However, the fuel cell power plant must overcome stiff technical and economic challenges before it can be commercialized for light-duty vehicle applications. Progress achieved in phase 1 on the use of monolithic catalyst reactors in the fuel processor, managing CO impurity in the fuel cell stack, low-cost electrode-membrane assemblies, and on the integration of the fuel processor with a Ballard PEM fuel cell stack will be presented.

  11. A VME-based software trigger system using UNIX processors

    NASA Astrophysics Data System (ADS)

    Atmur, Robert; Connor, David F.; Molzon, William

    1997-02-01

    We have constructed a distributed computing platform with eight processors to assemble and filter data from digitization crates. The filtered data were transported to a tape-writing UNIX computer via ethernet. Each processor ran a UNIX operating system and was installed in its own VME crate. Each VME crate contained dual-port memories which interfaced with the digitizers. Using standard hardware and software (VME and UNIX) allows us to select from a wide variety of non-proprietary products and makes upgrades simpler, if they are necessary.

  12. Flight design system level C requirements. Solid rocket booster and external tank impact prediction processors. [space transportation system

    NASA Technical Reports Server (NTRS)

    Seale, R. H.

    1979-01-01

    The prediction of the SRB and ET impact areas requires six separate processors. The SRB impact prediction processor computes the impact areas and related trajectory data for each SRB element. Output from this processor is stored on a secure file accessible by the SRB impact plot processor, which generates the required plots. Similarly, the ET RTLS impact prediction processor and the ET RTLS impact plot processor generate the ET impact footprints for return-to-launch-site (RTLS) profiles. The ET nominal/AOA/ATO impact prediction processor and the ET nominal/AOA/ATO impact plot processor generate the ET impact footprints for non-RTLS profiles. The SRB and ET impact processors compute the size and shape of the impact footprints by tabular lookup in a stored footprint dispersion data base. The location of each footprint is determined by simulating a reference trajectory and computing the reference impact point location. To ensure consistency among all flight design system (FDS) users, much of the input required by these processors will be obtained from the FDS master data base.

  13. Broadband set-top box using MAP-CA processor

    NASA Astrophysics Data System (ADS)

    Bush, John E.; Lee, Woobin; Basoglu, Chris

    2001-12-01

    Advances in broadband access are expected to exert a profound impact in our everyday life. It will be the key to the digital convergence of communication, computer and consumer equipment. A common thread that facilitates this convergence comprises digital media and Internet. To address this market, Equator Technologies, Inc., is developing the Dolphin broadband set-top box reference platform using its MAP-CA Broadband Signal Processor™ chip. The Dolphin reference platform is a universal media platform for display and presentation of digital contents on end-user entertainment systems. The objective of the Dolphin reference platform is to provide a complete set-top box system based on the MAP-CA processor. It includes all the necessary hardware and software components for the emerging broadcast and the broadband digital media market based on IP protocols. Such reference design requires a broadband Internet access and high-performance digital signal processing. By using the MAP-CA processor, the Dolphin reference platform is completely programmable, allowing various codecs to be implemented in software, such as MPEG-2, MPEG-4, H.263 and proprietary codecs. The software implementation also enables field upgrades to keep pace with evolving technology and industry demands.

  14. Quality of red blood cells washed using the ACP 215 cell processor: assessment of optimal pre- and postwash storage times and conditions.

    PubMed

    Hansen, Adele; Yi, Qi-Long; Acker, Jason P

    2013-08-01

    Washing of red blood cell concentrates (RCCs) is required for potassium-sensitive transfusion recipients, including neonates in need of large-volume transfusions. When open, nonsterile washing systems are used, postwash outdate time is limited to 24 hours, often leading to problems providing the component to the patient before expiry. A closed, automated cell processor, the ACP 215 from Haemonetics Corporation, was used to wash RCCs and determine optimal pre- and postwash storage times. Two postwash storage solutions, additive solution (AS)-3 and saline-adenine-glucose-mannitol (SAGM), were compared. The in vitro quality of leukoreduced RCCs, prepared from citrate-phosphate-dextrose-anticoagulated whole blood, was determined postwash and compared to existing guidelines for RCC quality (hemoglobin content, hematocrit, and hemolysis) and predetermined criteria for ATP and supernatant potassium levels. A criterion for visual hemolysis was also applied. The prewash storage time, postwash storage time, and the postwash resuspension solution all contributed to RCC quality postwash. Levels of hemolysis were greater when washed RCCs were resuspended in SAGM (p = 0.01), while AS-3 proved worse at maintaining ATP levels postwash (p < 0.01). Immediately postwash, all units had supernatant K+ levels below the detection limit of the instrument (<1 mmol/L), but these increased to above acceptable levels within 14 days. Based on all acceptance criteria, a maximum 14-day prewash storage period and 7-day postwash storage period in SAGM preservative was found to be optimal. The longer outdate time postwashing should help lessen challenges in providing components to patients before expiry. © 2013 American Association of Blood Banks.

  15. A Non-Cut Cell Immersed Boundary Method for Use in Icing Simulations

    NASA Technical Reports Server (NTRS)

    Sarofeen, Christian M.; Noack, Ralph W.; Kreeger, Richard E.

    2013-01-01

    This paper describes a computational fluid dynamic method used for modelling changes in aircraft geometry due to icing. While an aircraft undergoes icing, the accumulated ice results in a geometric alteration of the aerodynamic surfaces. In computational simulations for icing, it is necessary that the corresponding geometric change is taken into consideration. The method used, herein, for the representation of the geometric change due to icing is a non-cut cell Immersed Boundary Method (IBM). Computational cells that are in a body fitted grid of a clean aerodynamic geometry that are inside a predicted ice formation are identified. An IBM is then used to change these cells from being active computational cells to having properties of viscous solid bodies. This method has been implemented in the NASA developed node centered, finite volume computational fluid dynamics code, FUN3D. The presented capability is tested for two-dimensional airfoils including a clean airfoil, an iced airfoil, and an airfoil in harmonic pitching motion about its quarter chord. For these simulations velocity contours, pressure distributions, coefficients of lift, coefficients of drag, and coefficients of pitching moment about the airfoil's quarter chord are computed and used for comparison against experimental results, a higher order panel method code with viscous effects, XFOIL, and the results from FUN3D's original solution process. The results of the IBM simulations show that the accuracy of the IBM compares satisfactorily with the experimental results, XFOIL results, and the results from FUN3D's original solution process.
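
    The cell-flagging step at the heart of this non-cut-cell approach can be sketched simply: test each cell of the existing body-fitted grid against the predicted ice shape and switch the cells inside it from active fluid cells to solid-body cells. The snippet below is a hedged illustration with a made-up circular "ice" region and flag values; it is not the FUN3D implementation.

```python
# Hedged sketch of non-cut-cell immersed-boundary flagging: cells whose centers fall
# inside the predicted ice shape are converted from fluid to solid.

import numpy as np

nx, ny = 200, 100
x, y = np.meshgrid(np.linspace(0.0, 1.0, nx), np.linspace(0.0, 0.5, ny), indexing="ij")

def inside_ice(px, py, cx=0.1, cy=0.25, r=0.03):
    # placeholder geometry test: a small circular ice cap near the leading edge
    return (px - cx) ** 2 + (py - cy) ** 2 <= r ** 2

FLUID, SOLID = 0, 1
cell_type = np.where(inside_ice(x, y), SOLID, FLUID)

# the flow solver would then impose solid (no-slip, viscous body) behaviour in SOLID cells
print("cells converted to solid:", int(cell_type.sum()))
```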

  16. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Manning, R. M.

    1994-01-01

    antenna required to establish a link with the satellite, the statistical parameters that characterize the rainrate process at the terminal site, the length of the propagation path within the potential rain region, and its projected length onto the local horizontal. The IBM PC version of LeRC-SLAM (LEW-14979) is written in Microsoft QuickBASIC for an IBM PC compatible computer with a monitor and printer capable of supporting an 80-column format. The IBM PC version is available on a 5.25 inch MS-DOS format diskette. The program requires about 30K RAM. The source code and executable are included. The Macintosh version of LeRC-SLAM (LEW-14977) is written in Microsoft Basic, Binary (b) v2.00 for Macintosh II series computers running MacOS. This version requires 400K RAM and is available on a 3.5 inch 800K Macintosh format diskette, which includes source code only. The Macintosh version was developed in 1987 and the IBM PC version was developed in 1989. IBM PC is a trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh is a registered trademark of Apple Computer, Inc.

  17. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the field of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates at 8-bit parallel in a full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.

  18. Digital signal processor and processing method for GPS receivers

    NASA Technical Reports Server (NTRS)

    Thomas, Jr., Jess B. (Inventor)

    1989-01-01

    A digital signal processor and processing method therefor for use in receivers of the NAVSTAR/GLOBAL POSITIONING SYSTEM (GPS) employs a digital carrier down-converter, digital code correlator and digital tracking processor. The digital carrier down-converter and code correlator consists of an all-digital, minimum bit implementation that utilizes digital chip and phase advancers, providing exceptional control and accuracy in feedback phase and in feedback delay. Roundoff and commensurability errors can be reduced to extremely small values (e.g., less than 100 nanochips and 100 nanocycles roundoff errors and 0.1 millichip and 1 millicycle commensurability errors). The digital tracking processor bases the fast feedback for phase and for group delay in the C/A, P1, and P2 channels on the L1 C/A carrier phase, thereby maintaining lock at lower signal-to-noise ratios, reducing errors in feedback delays, reducing the frequency of cycle slips and in some cases obviating the need for quadrature processing in the P channels. Simple and reliable methods are employed for data bit synchronization, data bit removal and cycle counting. Improved precision in averaged output delay values is provided by carrier-aided data-compression techniques. The signal processor employs purely digital operations in the sense that exactly the same carrier phase and group delay measurements are obtained, to the last decimal place, every time the same sampled data (i.e., exactly the same bits) are processed.

  19. Method for operating a combustor in a fuel cell system

    DOEpatents

    Chalfant, Robert W.; Clingerman, Bruce J.

    2002-01-01

    A method of operating a combustor to heat a fuel processor in a fuel cell system, in which the fuel processor generates a hydrogen-rich stream a portion of which is consumed in a fuel cell stack and a portion of which is discharged from the fuel cell stack and supplied to the combustor, and wherein first and second streams are supplied to the combustor, the first stream being a hydrocarbon fuel stream and the second stream consisting of said hydrogen-rich stream, the method comprising the steps of monitoring the temperature of the fuel processor; regulating the quantity of the first stream to the combustor according to the temperature of the fuel processor; and comparing said quantity of said first stream to a predetermined value or range of predetermined values.
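
    The three claimed steps map naturally onto a small control loop: read the fuel-processor temperature, adjust the hydrocarbon (first) stream accordingly, and check the commanded quantity against a predetermined range. The sketch below only illustrates that logic; the setpoint, gain, limits, and read_temperature callback are assumptions for illustration, not values from the patent.

```python
# Hedged sketch of the claimed control steps for regulating the hydrocarbon (first) stream.

TARGET_TEMP = 700.0            # desired fuel-processor temperature, degrees C (assumed)
FUEL_MIN, FUEL_MAX = 0.0, 1.0  # allowed range for the hydrocarbon stream command (assumed)
GAIN = 0.002                   # proportional gain (assumed)

def regulate_first_stream(current_fuel, measured_temp):
    # proportional adjustment: command more hydrocarbon fuel when the processor runs cold
    fuel = current_fuel + GAIN * (TARGET_TEMP - measured_temp)
    return min(max(fuel, FUEL_MIN), FUEL_MAX)

def step(current_fuel, read_temperature):
    temp = read_temperature()                          # step 1: monitor processor temperature
    fuel = regulate_first_stream(current_fuel, temp)   # step 2: regulate the first stream
    in_range = FUEL_MIN <= fuel <= FUEL_MAX            # step 3: compare to predetermined range
    return fuel, in_range

print(step(0.5, lambda: 650.0))
```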

  20. A microcomputer based frequency-domain processor for laser Doppler anemometry

    NASA Technical Reports Server (NTRS)

    Horne, W. Clifton; Adair, Desmond

    1988-01-01

    A prototype multi-channel laser Doppler anemometry (LDA) processor was assembled using a wideband transient recorder and a microcomputer with an array processor for fast Fourier transform (FFT) computations. The prototype instrument was used to acquire, process, and record signals from a three-component wind tunnel LDA system subject to various conditions of noise and flow turbulence. The recorded data was used to evaluate the effectiveness of burst acceptance criteria, processing algorithms, and selection of processing parameters such as record length. The recorded signals were also used to obtain comparative estimates of signal-to-noise ratio between time-domain and frequency-domain signal detection schemes. These comparisons show that the FFT processing scheme allows accurate processing of signals for which the signal-to-noise ratio is 10 to 15 dB less than is practical using counter processors.