Enabling Future Robotic Missions with Multicore Processors
NASA Technical Reports Server (NTRS)
Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.
2011-01-01
Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.
RASSP signal processing architectures
NASA Astrophysics Data System (ADS)
Shirley, Fred; Bassett, Bob; Letellier, J. P.
1995-06-01
The rapid prototyping of application specific signal processors (RASSP) program is an ARPA/tri-service effort to dramatically improve the process by which complex digital systems, particularly embedded signal processors, are specified, designed, documented, manufactured, and supported. The domain of embedded signal processing was chosen because it is important to a variety of military and commercial applications as well as for the challenge it presents in terms of complexity and performance demands. The principal effort is being performed by two major contractors, Lockheed Sanders (Nashua, NH) and Martin Marietta (Camden, NJ). For both, improvements in methodology are to be exercised and refined through the performance of individual 'Demonstration' efforts. The Lockheed Sanders' Demonstration effort is to develop an infrared search and track (IRST) processor. In addition, both contractors' results are being measured by a series of externally administered (by Lincoln Labs) six-month Benchmark programs that measure process improvement as a function of time. The first two Benchmark programs are designing and implementing a synthetic aperture radar (SAR) processor. Our demonstration team is using commercially available VME modules from Mercury Computer to assemble a multiprocessor system scalable from one to hundreds of Intel i860 microprocessors. Custom modules for the sensor interface and display driver are also being developed. This system implements either proprietary or Navy owned algorithms to perform the compute-intensive IRST function in real time in an avionics environment. Our Benchmark team is designing custom modules using commercially available processor chip sets, communication submodules, and reconfigurable logic devices. One of the modules contains multiple vector processors optimized for fast Fourier transform processing. Another module is a fiberoptic interface that accepts high-rate input data from the sensors and provides video-rate output data to a display. This paper discusses the impact of simulation on choosing signal processing algorithms and architectures, drawing from the experiences of the Demonstration and Benchmark inter-company teams at Lockheed Sanders, Motorola, Hughes, and ISX.
Fast Fourier Transform Co-Processor (FFTC)- Towards Embedded GFLOPs
NASA Astrophysics Data System (ADS)
Kuehl, Christopher; Liebstueckel, Uwe; Tejerina, Isaac; Uemminghaus, Michael; Witte, Felix; Kolb, Michael; Suess, Martin; Weigand, Roland
2012-08-01
Many signal processing applications and algorithms perform their operations on the data in the transform domain to gain efficiency. The Fourier Transform Co-Processor has been developed with the aim to offload General Purpose Processors from performing these transformations and therefore to boost the overall performance of a processing module. The IP of the commercial PowerFFT processor has been selected and adapted to meet the constraints of the space environment. In frame of the ESA activity “Fast Fourier Transform DSP Co-processor (FFTC)” (ESTEC/Contract No. 15314/07/NL/LvH/ma) the objectives were the following: production of prototypes of a space qualified version of the commercial PowerFFT chip called FFTC based on the PowerFFT IP, and development of a stand-alone FFTC Accelerator Board (FTAB) based on the FFTC, including the Controller FPGA and SpaceWire Interfaces, to verify the FFTC function and performance. The FFTC chip performs its calculations with floating point precision. Stand alone it is capable of computing FFTs of up to 1K complex samples in length in only 10 μsec. This corresponds to an equivalent processing performance of 4.7 GFlops. In this mode the maximum sustained data throughput reaches 6.4 Gbit/s. When connected to up to 4 EDAC protected SDRAM memory banks the FFTC can perform long FFTs with up to 1M complex samples in length or multidimensional FFT-based processing tasks. A Controller FPGA on the FTAB takes care of the SDRAM addressing. The instructions commanded via the Controller FPGA are used to set up the data flow and generate the memory addresses. The presentation will give an overview of the project, including the results of the validation of the FFTC ASIC prototypes.
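The quoted 4.7 GFLOPS figure can be sanity-checked against the common 5·N·log2(N) operation count for a radix-2 complex FFT; the sketch below is a back-of-the-envelope check, not the authors' own derivation, and the exact operation count they assumed is not stated in the abstract.

```python
import math

# Rough check of the 4.7 GFLOPS equivalent-performance claim:
# a 1K-point complex FFT computed in 10 microseconds.
N = 1024                       # complex samples per FFT
t = 10e-6                      # seconds per transform
flops = 5 * N * math.log2(N)   # ~51,200 real floating-point operations (assumed count)
print(flops / t / 1e9)         # ~5.1 GFLOPS, consistent with the quoted 4.7
```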
Fast Fourier Transform Co-processor (FFTC), towards embedded GFLOPs
NASA Astrophysics Data System (ADS)
Kuehl, Christopher; Liebstueckel, Uwe; Tejerina, Isaac; Uemminghaus, Michael; Witte, Felix; Kolb, Michael; Suess, Martin; Weigand, Roland; Kopp, Nicholas
2012-10-01
Many signal processing applications and algorithms perform their operations on the data in the transform domain to gain efficiency. The Fourier Transform Co-Processor has been developed with the aim to offload General Purpose Processors from performing these transformations and therefore to boost the overall performance of a processing module. The IP of the commercial PowerFFT processor has been selected and adapted to meet the constraints of the space environment. In frame of the ESA activity "Fast Fourier Transform DSP Co-processor (FFTC)" (ESTEC/Contract No. 15314/07/NL/LvH/ma) the objectives were the following: • Production of prototypes of a space qualified version of the commercial PowerFFT chip called FFTC based on the PowerFFT IP. • The development of a stand-alone FFTC Accelerator Board (FTAB) based on the FFTC including the Controller FPGA and SpaceWire Interfaces to verify the FFTC function and performance. The FFTC chip performs its calculations with floating point precision. Stand alone it is capable of computing FFTs of up to 1K complex samples in length in only 10 μsec. This corresponds to an equivalent processing performance of 4.7 GFlops. In this mode the maximum sustained data throughput reaches 6.4 Gbit/s. When connected to up to 4 EDAC protected SDRAM memory banks the FFTC can perform long FFTs with up to 1M complex samples in length or multidimensional FFT-based processing tasks. A Controller FPGA on the FTAB takes care of the SDRAM addressing. The instructions commanded via the Controller FPGA are used to set up the data flow and generate the memory addresses. The paper will give an overview of the project, including the results of the validation of the FFTC ASIC prototypes.
RISC Processors and High Performance Computing
NASA Technical Reports Server (NTRS)
Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.
Fault-Tolerant, Radiation-Hard DSP
NASA Technical Reports Server (NTRS)
Czajkowski, David
2011-01-01
Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor back to normal operation through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and in some cases to suffer significant speed reduction. The benefits of COTS high-performance signal processing include a significant increase in onboard science data processing, enabling orders of magnitude reduction in required communication bandwidth for science data return, orders of magnitude improvement in onboard mission planning and critical decision making, and the ability to rapidly respond to changing mission environments, thus enabling opportunistic science and orders of magnitude reduction in the cost of mission operations through reduction of required staff. Additional benefits of COTS-based, high-performance signal processing include the ability to leverage considerable commercial and academic investments in advanced computing tools, techniques, and infrastructure, and the familiarity of the science and IT community with these computing environments.
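The idea behind time-triple modular redundancy (TTMR) — executing each operation three times on a single processor and voting on the results — can be illustrated with a minimal, hypothetical sketch; the real implementation schedules the redundant copies across VLIW issue slots in hardware- and compiler-specific ways that are not shown here.

```python
def ttmr(op, *args):
    """Time-triple modular redundancy, illustrative only:
    run the operation three times (on VLIW hardware the copies would
    occupy separate issue slots) and return the majority-voted result."""
    results = [op(*args) for _ in range(3)]
    # A single-event upset that corrupts one copy is outvoted by the other two.
    for candidate in results:
        if results.count(candidate) >= 2:
            return candidate
    raise RuntimeError("No majority -- multi-copy upset not correctable")

# Example usage with an ordinary arithmetic operation.
print(ttmr(lambda a, b: a + b, 2, 3))  # -> 5
```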
Recall Performance for Content-Addressable Memory Using Adiabatic Quantum Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Humble, Travis S.; McCaskey, Alex
A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess the accuracy of the quantum processor to store key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas.
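The mapping the authors describe — a recurrent (Hopfield-style) CAM expressed as an Ising spin model — can be sketched classically. The example below is a minimal, assumption-laden illustration using the Hebbian learning rule (one of the "learning rules from the neural network community" mentioned), with the adiabatic quantum optimizer replaced by simple energy descent via single-spin updates.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian learning rule: W = (1/P) * sum of outer products, zero diagonal.
    Patterns are +/-1 vectors; W defines the Ising couplings."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / len(P)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=200):
    """Recall by descending the Ising/Hopfield energy E(s) = -1/2 s^T W s
    with asynchronous single-spin updates. An adiabatic quantum optimizer
    would minimize the equivalent Ising Hamiltonian directly."""
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        i = np.random.randint(len(s))
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

patterns = [[1, -1, 1, -1, 1, 1], [-1, -1, 1, 1, -1, 1]]
W = hebbian_weights(patterns)
noisy = [1, 1, 1, -1, 1, 1]      # first pattern with one bit flipped
print(recall(W, noisy))          # usually converges back to the stored pattern
```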
Next Generation Space Telescope Integrated Science Module Data System
NASA Technical Reports Server (NTRS)
Schnurr, Richard G.; Greenhouse, Matthew A.; Jurotich, Matthew M.; Whitley, Raymond; Kalinowski, Keith J.; Love, Bruce W.; Travis, Jeffrey W.; Long, Knox S.
1999-01-01
The data system for the Next Generation Space Telescope (NGST) Integrated Science Module (ISIM) is the primary data interface between the spacecraft, telescope, and science instrument systems. This poster includes block diagrams of the ISIM data system and its components derived during the pre-phase A Yardstick feasibility study. The poster details the hardware and software components used to acquire and process science data for the Yardstick instrument complement, and depicts the baseline external interfaces to science instruments and other systems. This baseline data system is a fully redundant, high performance computing system. Each redundant computer contains three 150 MHz PowerPC processors. All processors execute a commercially available real time multi-tasking operating system supporting preemptive multi-tasking, file management and network interfaces. The six processors in the system are networked together. The spacecraft interface baseline is an extension of the network, which links the six processors. The final selection of processor busses, processor chips, network interfaces, and high-speed data interfaces will be made during mid 2002.
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1993-01-01
This technical report contains the HOL listings of the specification of the design and major portions of the requirements for a commercially developed processor interface unit (or PIU). The PIU is an interface chip performing memory interface, bus interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. This report contains the actual HOL listings of the PIU specification as it currently exists. Section two of this report contains general-purpose HOL theories that support the PIU specification. These theories include definitions for the hardware components used in the PIU, our implementation of bit words, and our implementation of temporal logic. Section three contains the HOL listings for the PIU design specification. Aside from the PIU internal bus (I-Bus), this specification is complete. Section four contains the HOL listings for a major portion of the PIU requirements specification. Specifically, it contains most of the definition for the PIU behavior associated with memory accesses initiated by the local processor.
The design of an adaptive predictive coder using a single-chip digital signal processor
NASA Astrophysics Data System (ADS)
Randolph, M. A.
1985-01-01
A speech coding processor architecture design study has been performed in which the Texas Instruments TMS32010 has been selected from among three commercially available digital signal processing integrated circuits and evaluated in an implementation study of real-time Adaptive Predictive Coding (APC). The TMS32010 has been compared with the AT&T Bell Laboratories DSP I and the Nippon Electric Co. μPD7720 and was found to be most suitable for a single-chip implementation of APC. A preliminary design of a system based on the TMS32010 has been performed, and several of the hardware and software design issues are discussed. Particular attention was paid to the design of an external memory controller which permits rapid sequential access of external RAM. As a result, it has been determined that a compact hardware implementation of the APC algorithm is feasible based on the TMS32010. Originator-supplied keywords include: vocoders, speech compression, adaptive predictive coding, digital signal processing microcomputers, speech processor architectures, and special purpose processor.
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1993-01-01
This technical report contains the Higher-Order Logic (HOL) listings of the partial verification of the requirements and design for a commercially developed processor interface unit (PIU). The PIU is an interface chip performing memory interface, bus interface, and additional support services for a commercial microprocessor within a fault tolerant computer system. This system, the Fault Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. This report contains the actual HOL listings of the PIU verification as it currently exists. Section two of this report contains general-purpose HOL theories and definitions that support the PIU verification. These include arithmetic theories dealing with inequalities and associativity, and a collection of tactics used in the PIU proofs. Section three contains the HOL listings for the completed PIU design verification. Section 4 contains the HOL listings for the partial requirements verification of the P-Port.
Implementation and Performance Analysis of Parallel Assignment Algorithms on a Hypercube Computer.
1987-12-01
coupled processors because of the degree of interaction between processors imposed by the global memory [HwB84]. Another sub-class of MIMD... interaction between the individual processors [MuA87]. Many of the commercial MIMD computers available today are loosely coupled [HwB84]. 2.1.3 The Hypercube... Alpha-beta is a method usually employed in the solution of two-person zero-sum games like chess and checkers [Qui87]. The basic approach of the alpha
Operation of commercial R3000 processors in the low earth orbit (LEO) space environment
NASA Astrophysics Data System (ADS)
Kaschmitter, J. L.; Shaeffer, D. L.; Colella, N. J.; McKnett, C. L.; Coakley, P. G.
1991-12-01
Spacecraft processors must operate with minimal degradation of performance in the LEO radiation environment, which includes the effects of total accumulated ionizing dose and single event phenomena (SEP) caused by protons and cosmic rays. Commercially available microprocessors can offer a number of advantages relative to radiation-hardened devices but are not normally designed to tolerate effects induced by the LEO environment. Extensive testing of the MIPS R3000 Reduced Instruction Set Computer (RISC) microprocessor family for operation in LEO environments is reported. The authors have characterized total dose and SEP effects for altitudes and inclinations of interest to systems operating in LEO, and they postulate techniques for detection and alleviation of SEP effects based on experimental results.
Advanced satellite communication system
NASA Technical Reports Server (NTRS)
Staples, Edward J.; Lie, Sen
1992-01-01
The objective of this research program was to develop an innovative advanced satellite receiver/demodulator utilizing surface acoustic wave (SAW) chirp transform processor and coherent BPSK demodulation. The algorithm of this SAW chirp Fourier transformer is of the Convolve - Multiply - Convolve (CMC) type, utilizing off-the-shelf reflective array compressor (RAC) chirp filters. This satellite receiver, if fully developed, was intended to be used as an on-board multichannel communications repeater. The Advanced Communications Receiver consists of four units: (1) CMC processor, (2) single sideband modulator, (3) demodulator, and (4) chirp waveform generator and individual channel processors. The input signal is composed of multiple user transmission frequencies operating independently from remotely located ground terminals. This signal is Fourier transformed by the CMC Processor into a unique time slot for each user frequency. The CMC processor is driven by a waveform generator through a single sideband (SSB) modulator. The output of the coherent demodulator is composed of positive and negative pulses, which are the envelopes of the chirp transform processor output. These pulses correspond to the data symbols. Following the demodulator, a logic circuit reconstructs the pulses into data, which are subsequently differentially decoded to form the transmitted data. The coherent demodulation and detection of BPSK signals derived from a CMC chirp transform processor were experimentally demonstrated and bit error rate (BER) testing was performed. To assess the feasibility of such advanced receiver, the results were compared with the theoretical analysis and plotted for an average BER as a function of signal-to-noise ratio. Another goal of this SBIR program was the development of a commercial product. The commercial product developed was an arbitrary waveform generator. The successful sales have begun with the delivery of the first arbitrary waveform generator.
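The theoretical reference curve against which coherent BPSK bit-error-rate measurements are conventionally compared is BER = ½·erfc(√(Eb/N0)); the abstract does not state the exact analysis used, so the sketch below is offered only as the standard textbook baseline.

```python
import math

def bpsk_ber(ebn0_db):
    """Theoretical bit error rate for ideal coherent BPSK in AWGN:
    BER = 0.5 * erfc(sqrt(Eb/N0)). Standard baseline curve, not
    necessarily the exact analysis used in the paper."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))

for snr_db in (0, 4, 8, 10):
    print(f"Eb/N0 = {snr_db:2d} dB -> BER = {bpsk_ber(snr_db):.2e}")
```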
Benchmarking NWP Kernels on Multi- and Many-core Processors
NASA Astrophysics Data System (ADS)
Michalakes, J.; Vachharajani, M.
2008-12-01
Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
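One common way to "characterize and model performance of the kernels in terms of computational intensity [and] memory bandwidth pressure" is a simple roofline estimate; the sketch below uses hypothetical numbers, not measurements from the WRF kernel study.

```python
def roofline_gflops(intensity_flops_per_byte, peak_gflops, peak_gbytes_per_s):
    """Attainable performance = min(peak compute, intensity * peak bandwidth).
    Illustrative roofline model; all parameters are placeholders."""
    return min(peak_gflops, intensity_flops_per_byte * peak_gbytes_per_s)

# Hypothetical kernel at 0.5 flop/byte on a node with 100 GFLOPS peak and 40 GB/s.
print(roofline_gflops(0.5, 100.0, 40.0))  # -> 20.0 GFLOPS (bandwidth-bound)
```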
System-Level Radiation Hardening
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2014-01-01
Although system-level radiation hardening can enable the use of high-performance components and enhance the capabilities of a spacecraft, hardening techniques can be costly and can compromise the very performance designers sought from the high-performance components. Moreover, such techniques often result in a complicated design, especially if several complex commercial microcircuits are used, each posing its own hardening challenges. The latter risk is particularly acute for Commercial-Off-The-Shelf components since high-performance parts (e.g. double-data-rate synchronous dynamic random access memories - DDR SDRAMs) may require other high-performance commercial parts (e.g. processors) to support their operation. For these reasons, it is essential that system-level radiation hardening be a coordinated effort, from setting requirements through testing up to and including validation.
Efficient Sorting on the Tilera Manycore Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morari, Alessandro; Tumeo, Antonino; Villa, Oreste
We present an efficient implementation of the radix sort algorithm for the Tilera TILEPro64 processor. The TILEPro64 is one of the first successful commercial manycore processors. It is composed of 64 tiles interconnected through multiple fast Networks-on-chip and features a fully coherent, shared distributed cache. The architecture has a large degree of flexibility, and allows various optimization strategies. We describe how we mapped the algorithm to this architecture. We present an in-depth analysis of the optimizations for each phase of the algorithm with respect to the processor's sustained performance. We discuss the overall throughput reached by our radix sort implementation (up to 132 MK/s) and show that it provides comparable or better performance-per-watt with respect to state-of-the-art implementations on x86 processors and graphic processing units.
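Radix sort of the kind mapped to the TILEPro64 proceeds in digit-wise passes; a minimal serial sketch follows. The paper's contribution is how these histogram/scatter passes are partitioned across the 64 tiles and their shared distributed cache, which is not reproduced here.

```python
def radix_sort(keys, bits_per_pass=8, key_bits=32):
    """Least-significant-digit radix sort using bucket passes.
    On a manycore processor each pass would be partitioned across
    tiles; this serial version only shows the algorithm itself."""
    radix = 1 << bits_per_pass
    for shift in range(0, key_bits, bits_per_pass):
        buckets = [[] for _ in range(radix)]
        for k in keys:
            buckets[(k >> shift) & (radix - 1)].append(k)
        keys = [k for bucket in buckets for k in bucket]
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
```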
Color sensor and neural processor on one chip
NASA Astrophysics Data System (ADS)
Fiesler, Emile; Campbell, Shannon R.; Kempen, Lothar; Duong, Tuan A.
1998-10-01
A low-cost, compact, and robust color sensor that can operate in real time under various environmental conditions can benefit many applications, including quality control, chemical sensing, food production, medical diagnostics, energy conservation, monitoring of hazardous waste, and recycling. Unfortunately, existing color sensors are either bulky and expensive or do not provide the required speed and accuracy. In this publication we describe the design of an accurate real-time color classification sensor, together with preprocessing and a subsequent neural network processor integrated on a single complementary metal oxide semiconductor (CMOS) integrated circuit. This one-chip sensor and information processor will be low in cost, robust, and mass-producible using standard commercial CMOS processes. The performance of the chip and the feasibility of its manufacturing are demonstrated through computer simulations based on CMOS hardware parameters. Comparisons with competing methodologies show a significantly higher performance for our device.
System Architecture For High Speed Sorting Of Potatoes
NASA Astrophysics Data System (ADS)
Marchant, J. A.; Onyango, C. M.; Street, M. J.
1989-03-01
This paper illustrates an industrial application of vision processing in which potatoes are sorted according to their size and shape at speeds of up to 40 objects per second. The result is a multi-processing approach built around the VME bus. A hardware unit has been designed and constructed to encode the boundary of the potatoes, to reduce the amount of data to be processed. A master 68000 processor is used to control this unit and to handle data transfers along the bus. Boundary data is passed to one of three 68010 slave processors, each responsible for a line of potatoes across a conveyor belt. The slave processors calculate attributes such as shape, size and estimated weight of each potato and the master processor uses this data to operate the sorting mechanism. The system has been interfaced with a commercial grading machine and performance trials are now in progress.
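Size and shape attributes of the kind computed by the 68010 slave processors can be derived directly from an encoded boundary; the sketch below (shoelace area, perimeter, and a circularity measure) is purely illustrative and is not the attribute set or weight-estimation model used in the paper.

```python
import math

def boundary_attributes(points):
    """Compute area (shoelace formula), perimeter, and circularity
    4*pi*A/P^2 (1.0 for a circle) from a closed boundary given as
    (x, y) vertices. Illustrative only, not the paper's method."""
    n = len(points)
    area = perim = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return area, perim, 4.0 * math.pi * area / perim ** 2

print(boundary_attributes([(0, 0), (4, 0), (4, 3), (0, 3)]))  # a 4x3 rectangle
```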
High coherence plane breaking packaging for superconducting qubits.
Bronn, Nicholas T; Adiga, Vivekananda P; Olivadese, Salvatore B; Wu, Xian; Chow, Jerry M; Pappas, David P
2018-04-01
We demonstrate a pogo pin package for a superconducting quantum processor specifically designed with a nontrivial layout topology (e.g., a center qubit that cannot be accessed from the sides of the chip). Two experiments on two nominally identical superconducting quantum processors in pogo packages, which use commercially available parts and require modest machining tolerances, are performed at low temperature (10 mK) in a dilution refrigerator and both found to behave comparably to processors in standard planar packages with wirebonds where control and readout signals come in from the edges. Single- and two-qubit gate errors are also characterized via randomized benchmarking, exhibiting similar error rates as in standard packages, opening the possibility of integrating pogo pin packaging with extensible qubit architectures.
High coherence plane breaking packaging for superconducting qubits
NASA Astrophysics Data System (ADS)
Bronn, Nicholas T.; Adiga, Vivekananda P.; Olivadese, Salvatore B.; Wu, Xian; Chow, Jerry M.; Pappas, David P.
2018-04-01
We demonstrate a pogo pin package for a superconducting quantum processor specifically designed with a nontrivial layout topology (e.g., a center qubit that cannot be accessed from the sides of the chip). Two experiments on two nominally identical superconducting quantum processors in pogo packages, which use commercially available parts and require modest machining tolerances, are performed at low temperature (10 mK) in a dilution refrigerator and both found to behave comparably to processors in standard planar packages with wirebonds where control and readout signals come in from the edges. Single- and two-qubit gate errors are also characterized via randomized benchmarking, exhibiting similar error rates as in standard packages, opening the possibility of integrating pogo pin packaging with extensible qubit architectures.
Ultra-Reliable Digital Avionics (URDA) processor
NASA Astrophysics Data System (ADS)
Branstetter, Reagan; Ruszczyk, William; Miville, Frank
1994-10-01
Texas Instruments Incorporated (TI) developed the URDA processor design under contract with the U.S. Air Force Wright Laboratory and the U.S. Army Night Vision and Electro-Sensors Directorate. TI's approach couples advanced packaging solutions with advanced integrated circuit (IC) technology to provide a high-performance (200 MIPS/800 MFLOPS) modular avionics processor module for a wide range of avionics applications. TI's processor design integrates two Ada-programmable, URDA basic processor modules (BPM's) with a JIAWG-compatible PiBus and TMBus on a single F-22 common integrated processor-compatible form-factor SEM-E avionics card. A separate, high-speed (25-MWord/second 32-bit word) input/output bus is provided for sensor data. Each BPM provides a peak throughput of 100 MIPS scalar concurrent with 400-MFLOPS vector processing in a removable multichip module (MCM) mounted to a liquid-flowthrough (LFT) core and interfacing to a processor interface module printed wiring board (PWB). Commercial RISC technology coupled with TI's advanced bipolar complementary metal oxide semiconductor (BiCMOS) application specific integrated circuit (ASIC) and silicon-on-silicon packaging technologies are used to achieve the high performance in a miniaturized package. A MIPS R4000-family reduced instruction set computer (RISC) processor and a TI 100-MHz BiCMOS vector coprocessor (VCP) ASIC provide, respectively, the 100 MIPS of scalar processor throughput and 400 MFLOPS of vector processing throughput for each BPM. The TI Aladdin ASIC chipset was developed on the TI Aladdin Program under contract with the U.S. Army Communications and Electronics Command and was sponsored by the Advanced Research Projects Agency with technical direction from the U.S. Army Night Vision and Electro-Sensors Directorate.
High-performance computing for airborne applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom
2010-06-28
Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.
Multiphase complete exchange on Paragon, SP2 and CS-2
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1995-01-01
The overhead of interprocessor communication is a major factor in limiting the performance of parallel computer systems. The complete exchange is the severest communication pattern in that it requires each processor to send a distinct message to every other processor. This pattern is at the heart of many important parallel applications. On hypercubes, multiphase complete exchange has been developed and shown to provide optimal performance over varying message sizes. Most commercial multicomputer systems do not have a hypercube interconnect. However, they use special purpose hardware and dedicated communication processors to achieve very high performance communication and can be made to emulate the hypercube quite well. Multiphase complete exchange has been implemented on three contemporary parallel architectures: the Intel Paragon, IBM SP2 and Meiko CS-2. The essential features of these machines are described and their basic interprocessor communication overheads are discussed. The performance of multiphase complete exchange is evaluated on each machine. It is shown that the theoretical ideas developed for hypercubes are also applicable in practice to these machines and that multiphase complete exchange can lead to major savings in execution time over traditional solutions.
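The structure of a complete exchange can be illustrated with the standard pairwise (XOR) schedule, in which at step i each processor p exchanges directly with partner p XOR i; the multiphase algorithm of the paper interpolates between this direct schedule and the log2(P)-step store-and-forward hypercube schedule, a refinement not shown in this minimal sketch.

```python
def pairwise_exchange_schedule(num_procs):
    """Direct complete-exchange schedule for a power-of-two machine:
    at step i, processor p exchanges its distinct message with partner p ^ i,
    so every pair of processors meets exactly once. Scheduling only."""
    schedule = []
    for step in range(1, num_procs):
        schedule.append([(p, p ^ step) for p in range(num_procs)])
    return schedule

for step, pairs in enumerate(pairwise_exchange_schedule(4), start=1):
    print(f"step {step}: {pairs}")
# step 1: [(0, 1), (1, 0), (2, 3), (3, 2)] ...
```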
Development of flame resistant treatment for Nomex fibrous structures
NASA Technical Reports Server (NTRS)
Toy, M. S.
1978-01-01
Technology which renders aramid fibrous structures flame resistant through chemical modification was developed. The project scaled up flame resistant treatment from laboratory fabric swatches of a few inches to efficiently producing ten yards of commercial width (41 inches) aromatic polyamide. The radiation intensity problem of the processor was resolved. Further improvement of the processor cooling system was recommended for two reasons: (1) To advance current technology of flame proofing Nomex fabric to higher oxygen enriched atmospheres; and (2) To adapt the processor for direct applicability to low cost commercial fabrics.
Techniques for the rapid display and manipulation of 3-D biomedical data.
Goldwasser, S M; Reynolds, R A; Talton, D A; Walsh, E S
1988-01-01
The use of fully interactive 3-D workstations with true real-time performance will become increasingly common as technology matures and economical commercial systems become available. This paper provides a comprehensive introduction to high speed approaches to the display and manipulation of 3-D medical objects obtained from tomographic data acquisition systems such as CT, MR, and PET. A variety of techniques are outlined including the use of software on conventional minicomputers, hardware assist devices such as array processors and programmable frame buffers, and special purpose computer architecture for dedicated high performance systems. While both algorithms and architectures are addressed, the major theme centers around the utilization of hardware-based approaches including parallel processors for the implementation of true real-time systems.
Prototyping the HPDP Chip on STM 65 NM Process
NASA Astrophysics Data System (ADS)
Papadas, C.; Dramitinos, G.; Syed, M.; Helfers, T.; Dedes, G.; Schoellkopf, J.-P.; Dugoujon, L.
2011-08-01
Currently Astrium GmbH is involved in the High Performance Data Processor (HPDP) development programme for telecommunication applications under a DLR contract. The HPDP project targets the implementation of the commercially available reconfigurable array processor IP (XPP from the company PACT XPP Technologies) in a radiation hardened technology. In the current complementary development phase funded under the Greek Industry Incentive scheme, it is planned to prototype the HPDP chip in commercial STM 65 nm technology. In addition it is also planned to utilise the preliminary radiation hardened components of this library wherever possible. This abstract gives an overview of the HPDP chip architecture, the basic details of the STM 65 nm process and the design flow foreseen for the prototyping. The paper will discuss the development and integration issues involved in using the STM 65 nm process (also including the available preliminary radiation hardened components) for designs targeted to be used in space applications.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-15
... (coop) programs for the at-sea mothership and catcher/processor trawl fleets (whiting only). Since that... permit holder (vessel owner) to change their vessel ownership, 9. Clarify that the processor obligation..., Mothership Coop (MS) Program--Whiting At-sea Trawl Fishery, and Catcher-Processor (C/P) Coop Program--Whiting...
CSP: A Multifaceted Hybrid Architecture for Space Computing
NASA Technical Reports Server (NTRS)
Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron
2014-01-01
Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.
NASA Technical Reports Server (NTRS)
Irom, Farokh; Farmanesh, Farhad; Kouba, Coy K.
2006-01-01
SEU from heavy ions is measured for SOI PowerPC microprocessors. Results for the 0.13 micron PowerPC show that SEU rates with a 1.1 V core voltage increase over the 1.3 V versions. This suggests that the improvement in SEU for scaled devices may be reversed. In recent years there has been interest in the possible use of unhardened commercial microprocessors in space because of their superior performance compared to hardened processors. However, unhardened devices are susceptible to upset from radiation in space. More information is needed on how they respond to radiation before they can be used in space. Only a limited number of advanced microprocessors have been subjected to radiation tests, and those are designed with lower clock frequencies and higher internal core voltages than recent devices [1-6]. However, the trend for commercial silicon-on-insulator (SOI) microprocessors is to reduce feature size and internal core voltage and increase the clock frequency. Commercial microprocessors with the PowerPC architecture are now available that use partially depleted SOI processes with a feature size of 90 nm, internal core voltage as low as 1.0 V, and clock frequency in the GHz range. Previously, we reported SEU measurements for SOI commercial PowerPCs with feature sizes of 0.18 and 0.13 micron [7, 8]. The results showed an order of magnitude reduction in saturated cross section compared to CMOS bulk counterparts. This paper examines SEUs in advanced commercial SOI microprocessors, focusing on the SEU sensitivity of the D-Cache and of processor hangs as functions of feature size and internal core voltage. Results are presented for the Motorola SOI processor with a feature size of 0.13 microns and internal core voltages of 1.3 and 1.1 V. These results are compared with results for the Motorola SOI processors with a feature size of 0.18 microns and internal core voltages of 1.6 and 1.3 V.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-19
.../processor trawl fleets (whiting only). Since that time, the Pacific Fishery Management Council (Council) and.... Clarify that the processor obligation may be to more than one MS permit, 10. Revise the mothership catcher... (MS) and Catcher/Processor (C/P) sectors of the Pacific Coast Groundfish fishery. This item was...
Scalable Multiprocessor for High-Speed Computing in Space
NASA Technical Reports Server (NTRS)
Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard
2004-01-01
A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard realtime applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at "hundreds" of pulses per second, each pulse "requiring" millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.
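The synchronization scheme described — slave processors computing offsets between a broadcast master time signal and their own local clocks — can be mocked up in a few lines; variable names are invented and the flight implementation's handling of broadcast propagation delay is not described in the report, so it is ignored here.

```python
import time

class SlaveClock:
    """Tracks the offset between a broadcast master timestamp and the
    local clock so later local readings can be mapped to master time.
    Illustrative only; ignores broadcast propagation delay."""
    def __init__(self):
        self.offset = 0.0

    def on_master_broadcast(self, master_time):
        # Record how far the local clock is from the master at broadcast time.
        self.offset = master_time - time.monotonic()

    def now(self):
        # Local reading corrected into the master time base.
        return time.monotonic() + self.offset
```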
Dynamic behavior of gasoline fuel cell electric vehicles
NASA Astrophysics Data System (ADS)
Mitchell, William; Bowers, Brian J.; Garnier, Christophe; Boudjemaa, Fabien
As we begin the 21st century, society is continuing efforts towards finding clean power sources and alternative forms of energy. In the automotive sector, reduction of pollutants and greenhouse gas emissions from the power plant is one of the main objectives of car manufacturers and innovative technologies are under active consideration to achieve this goal. One technology that has been proposed and vigorously pursued in the past decade is the proton exchange membrane (PEM) fuel cell, an electrochemical device that reacts hydrogen with oxygen to produce water, electricity and heat. Since today there is no existing extensive hydrogen infrastructure and no commercially viable hydrogen storage technology for vehicles, there is a continuing debate as to how the hydrogen for these advanced vehicles will be supplied. In order to circumvent the above issues, power systems based on PEM fuel cells can employ an on-board fuel processor that has the ability to convert conventional fuels such as gasoline into hydrogen for the fuel cell. This option could thereby remove the fuel infrastructure and storage issues. However, for these fuel processor/fuel cell vehicles to be commercially successful, issues such as start time and transient response must be addressed. This paper discusses the role of transient response of the fuel processor power plant and how it relates to the battery sizing for a gasoline fuel cell vehicle. In addition, results of fuel processor testing from a current Renault/Nuvera Fuel Cells project are presented to show the progress in transient performance.
Construction of a parallel processor for simulating manipulators and other mechanical systems
NASA Technical Reports Server (NTRS)
Hannauer, George
1991-01-01
This report summarizes the results of NASA Contract NAS5-30905, awarded under phase 2 of the SBIR Program, for a demonstration of the feasibility of a new high-speed parallel simulation processor, called the Real-Time Accelerator (RTA). The principal goals were met, and EAI is now proceeding with phase 3: development of a commercial product. This product is scheduled for commercial introduction in the second quarter of 1992.
NASA Technical Reports Server (NTRS)
Szabo, Carl M., Jr.; Duncan, Adam; LaBel, Kenneth A.; Kay, Matt; Bruner, Pat; Krzesniak, Mike; Dong, Lei
2015-01-01
Hardness assurance test results of Intel state-of-the-art 14nm Broadwell U-series processor System-on-a-Chip (SoC) for total dose are presented, along with first-look exploratory results from trials at a medical proton facility. Test method builds upon previous efforts by utilizing commercial laptop motherboards and software stress applications as opposed to more traditional automated test equipment (ATE).
Software Graphics Processing Unit (sGPU) for Deep Space Applications
NASA Technical Reports Server (NTRS)
McCabe, Mary; Salazar, George; Steele, Glen
2015-01-01
A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN... appear in Subpart—Potato Research and Promotion Plan. (b) Processor. Processor means any person who commercially processes potatoes into potato products, including, but not restricted to, frozen, dehydrated, or...
Code of Federal Regulations, 2011 CFR
2011-01-01
... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN... appear in Subpart—Potato Research and Promotion Plan. (b) Processor. Processor means any person who commercially processes potatoes into potato products, including, but not restricted to, frozen, dehydrated, or...
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN... appear in Subpart—Potato Research and Promotion Plan. (b) Processor. Processor means any person who commercially processes potatoes into potato products, including, but not restricted to, frozen, dehydrated, or...
NASA Technical Reports Server (NTRS)
Bartram, Peter N.
1989-01-01
The current Life Sciences Laboratory Equipment (LSLE) microcomputer for life sciences experiment data acquisition is now obsolete. Among the weaknesses of the current microcomputer are small memory size, relatively slow analog data sampling rates, and the lack of a bulk data storage device. While life science investigators normally prefer data to be transmitted to Earth as it is taken, this is not always possible. No down-link exists for experiments performed in the Shuttle middeck region. One important aspect of a replacement microcomputer is provision for in-flight storage of experimental data. The Write Once, Read Many (WORM) optical disk was studied because of its high storage density, data integrity, and the availability of a space-qualified unit. In keeping with the goals for a replacement microcomputer based upon commercially available components and standard interfaces, the system studied includes a Small Computer System Interface (SCSI) for interfacing the WORM drive. The system itself is designed around the STD bus, using readily available boards. Configurations examined were: (1) master processor board and slave processor board with the SCSI interface; (2) master processor with SCSI interface; (3) master processor with SCSI and Direct Memory Access (DMA); (4) master processor controlling a separate STD bus SCSI board; and (5) master processor controlling a separate STD bus SCSI board with DMA.
Preliminary Radiation Testing of a State-of-the-Art Commercial 14nm CMOS Processor/System-on-a-Chip
NASA Technical Reports Server (NTRS)
Szabo, Carl M., Jr.; Duncan, Adam; LaBel, Kenneth A.; Kay, Matt; Bruner, Pat; Krzesniak, Mike; Dong, Lei
2015-01-01
Hardness assurance test results of Intel state-of-the-art 14nm “Broadwell” U-series processor / System-on-a-Chip (SoC) for total ionizing dose (TID) are presented, along with exploratory results from trials at a medical proton facility. Test method builds upon previous efforts [1] by utilizing commercial laptop motherboards and software stress applications as opposed to more traditional automated test equipment (ATE).
NASA Technical Reports Server (NTRS)
Szabo, Carl M., Jr.; Duncan, Adam R.; Label, Kenneth A.
2017-01-01
Testing of an Intel 14nm desktop processor was conducted under proton irradiation. We share lessons learned, demonstrating that complex devices beget further complex challenges requiring practical and theoretical investigative expertise to solve.
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo; Quintana, Jorge A.; Soni, Nitin J.
1994-01-01
The NASA Lewis Research Center is developing a multichannel communication signal processing satellite (MCSPS) system which will provide low data rate, direct to user, commercial communications services. The focus of current space segment developments is a flexible, high-throughput, fault tolerant onboard information switching processor. This information switching processor (ISP) is a destination-directed packet switch which performs both space and time switching to route user information among numerous user ground terminals. Through both industry study contracts and in-house investigations, several packet switching architectures were examined. A contention-free approach, the shared memory per beam architecture, was selected for implementation. The shared memory per beam architecture, fault tolerance insertion, implementation, and demonstration plans are described.
Communication overhead on the Intel Paragon, IBM SP2 and Meiko CS-2
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1995-01-01
Interprocessor communication overhead is a crucial measure of the power of parallel computing systems-its impact can severely limit the performance of parallel programs. This report presents measurements of communication overhead on three contemporary commercial multicomputer systems: the Intel Paragon, the IBM SP2 and the Meiko CS-2. In each case the time to communicate between processors is presented as a function of message length. The time for global synchronization and memory access is discussed. The performance of these machines in emulating hypercubes and executing random pairwise exchanges is also investigated. It is shown that the interprocessor communication time depends heavily on the specific communication pattern required. These observations contradict the commonly held belief that communication overhead on contemporary machines is independent of the placement of tasks on processors. The information presented in this report permits the evaluation of the efficiency of parallel algorithm implementations against standard baselines.
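Measurements of communication time as a function of message length are conventionally summarized with the linear latency-bandwidth model t(m) = α + β·m (startup cost plus per-byte cost). The sketch below fits that model to ping-pong timings; the timing values shown are placeholders, not the report's measured numbers.

```python
import numpy as np

def fit_alpha_beta(message_sizes, times):
    """Least-squares fit of the standard model t(m) = alpha + beta*m.
    alpha ~ startup latency, 1/beta ~ asymptotic bandwidth."""
    A = np.vstack([np.ones_like(message_sizes, dtype=float), message_sizes]).T
    (alpha, beta), *_ = np.linalg.lstsq(A, times, rcond=None)
    return alpha, beta

# Hypothetical ping-pong timings (seconds) for messages of m bytes.
m = np.array([0, 1024, 4096, 16384, 65536], dtype=float)
t = np.array([60e-6, 70e-6, 100e-6, 220e-6, 700e-6])
alpha, beta = fit_alpha_beta(m, t)
print(f"latency ~ {alpha*1e6:.1f} us, bandwidth ~ {1/beta/1e6:.1f} MB/s")
```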
NASA Technical Reports Server (NTRS)
1994-01-01
The objective of this contract was the investigation of the potential performance gains that would result from an upgrade of the Space Station Freedom (SSF) Data Management System (DMS) Embedded Data Processor (EDP) '386' design with the Intel Pentium (registered trade-mark of Intel Corp.) '586' microprocessor. The Pentium ('586') is the latest member of the industry standard Intel X86 family of CISC (Complex Instruction Set Computer) microprocessors. This contract was scheduled to run in parallel with an internal IBM Federal Systems Company (FSC) Internal Research and Development (IR&D) task that had the goal to generate a baseline flight design for an upgraded EDP using the Pentium. This final report summarizes the activities performed in support of Contract NAS2-13758. Our plan was to baseline performance analyses and measurements on the latest state-of-the-art commercially available Pentium processor, representative of the proposed space station design, and then phase to an IBM capital funded breadboard version of the flight design (if available from IR&D and Space Station work) for additional evaluation of results. Unfortunately, the phase-over to the flight design breadboard did not take place, since the IBM Data Management System (DMS) for the Space Station Freedom was terminated by NASA before the referenced capital funded EDP breadboard could be completed. The baseline performance analyses and measurements, however, were successfully completed, as planned, on the commercial Pentium hardware. The results of those analyses, evaluations, and measurements are presented in this final report.
The AIS-5000 parallel processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, L.A.; Wilson, S.S.
1988-05-01
The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics to two-dimensional mesh-connected systems. The design of the processing elements and their interconnections as well as the software used to program the system allow a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.
Long-Wavelength Beam Steerer Based on a Micro-Electromechanical Mirror
Kos, Anthony B; Gerecht, Eyal
2013-01-01
Commercially available mirrors for scanning long-wavelength beams are too large for high-speed imaging. There is a need for a smaller, more agile pointing apparatus to provide images in seconds, not minutes or hours. A fast long-wavelength beam steerer uses a commercial micro-electro-mechanical system (MEMS) mirror controlled by a high-performance digital signal processor (DSP). The DSP allows high-speed raster scanning of the incident radiation, which is focused to a small waist onto the 9 mm², gold-coated MEMS mirror surface, while simultaneously acquiring an undistorted, high spatial-resolution image of an object. The beam steerer hardware, software and performance are described. The system can also serve as a miniaturized, high-performance long-wavelength beam chopper for lock-in detection. PMID:26401426
A distributed control system for the lower-hybrid current drive system on the Tokamak de Varennes
NASA Astrophysics Data System (ADS)
Bagdoo, J.; Guay, J. M.; Chaudron, G.-A.; Decoste, R.; Demers, Y.; Hubbard, A.
1990-08-01
An rf current drive system with an output power of 1 MW at 3.7 GHz is under development for the Tokamak de Varennes. The control system is based on an Ethernet local-area network of programmable logic controllers as front end, personal computers as consoles, and CAMAC-based DSP processors. The DSP processors ensure the PID control of the phase and rf power of each klystron, and the fast protection of high-power rf hardware, all within a 40 μs loop. Slower control and protection, event sequencing and the run-time database are provided by the programmable logic controllers, which communicate, via the LAN, with the consoles. The latter run commercial process-control console software. The LAN protocol respects the first four layers of the ISO/OSI 802.3 standard. Synchronization with the tokamak control system is provided by commercially available CAMAC timing modules which trigger shot-related events and reference waveform generators. A detailed description of each subsystem and a performance evaluation of the system will be presented.
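A minimal sketch of the kind of discrete PID update such a 40 μs klystron control loop performs is shown below. The structure (error, integral, and derivative terms) is standard textbook PID; the gains, struct, and function names are hypothetical and not taken from the paper.

    /* Textbook discrete PID update, as might run once per 40 microsecond
     * loop iteration on the DSP for klystron phase or rf power control.
     * All names and gains are hypothetical placeholders. */
    typedef struct {
        float kp, ki, kd;    /* proportional, integral, derivative gains */
        float integral;      /* accumulated error */
        float prev_error;    /* error from the previous iteration */
    } pid_state;

    float pid_step(pid_state *c, float setpoint, float measurement, float dt)
    {
        float error = setpoint - measurement;
        c->integral += error * dt;                        /* integral term   */
        float derivative = (error - c->prev_error) / dt;  /* derivative term */
        c->prev_error = error;
        return c->kp * error + c->ki * c->integral + c->kd * derivative;
    }

With dt = 40e-6, such a routine would be called from the loop's timer or data-ready interrupt and its output written to the phase shifter or power actuator.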
Shared performance monitor in a multiprocessor system
Chiu, George; Gara, Alan G.; Salapura, Valentina
2012-07-24
A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor device units, each processor device for generating signals representing occurrences of events in the processor device, and, a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU comprises: a plurality of performance counters each for counting signals representing occurrences of events from one or more of the plurality of processor units in the multiprocessor system; and, a plurality of input devices for receiving the event signals from one or more processor devices of the plurality of processor units, the plurality of input devices programmable to select event signals for receipt by one or more of the plurality of performance counters for counting, wherein the PMU is shared between multiple processing units, or within a group of processors in the multiprocessing system. The PMU is further programmed to monitor event signals issued from non-processor devices.
Protect sensitive data with lightweight memory encryption
NASA Astrophysics Data System (ADS)
Zhou, Hongwei; Yuan, Jinhui; Xiao, Rui; Zhang, Kai; Sun, Jingyao
2018-04-01
Since current commercial processors are not able to operate on data in ciphertext, sensitive data have to be exposed in memory, which leaves a window for the adversary. To protect the sensitive data, a direct idea is to encrypt the data when the processor does not access them. Based on this observation, we have developed a lightweight memory encryption scheme, called LeMe, to protect the sensitive data in an application. LeMe marks the sensitive data in memory with the page table entry and encrypts the data when they are not in use. LeMe is built on Linux with a 3.17.6 kernel and provides four user interfaces as a dynamic link library. Our evaluations show LeMe is effective in protecting the sensitive data and incurs an acceptable performance overhead.
DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.
Kim, Lok-Won
2018-05-01
Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but its heavy computation demand has considerably limited its practical application. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of one class of artificial neural network (ANN), restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301-billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).
Multiple Embedded Processors for Fault-Tolerant Computing
NASA Technical Reports Server (NTRS)
Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy
2005-01-01
A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
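The error-detection scheme amounts to comparing the redundant cores' outputs and flagging any disagreement. The fragment below is only a conceptual C model of that comparison, not the actual FPGA-fabric comparator, which checks bus signals cycle by cycle.

    /* Conceptual model of dual-redundant error detection: two cores run the
     * same computation and a comparator checks their outputs. A mismatch
     * indicates an upset in one core; with only two cores the fault can be
     * detected but not attributed, so the usual response is to flag the
     * error and retry or reset rather than correct it. */
    #include <stdbool.h>
    #include <stdint.h>

    bool outputs_agree(uint32_t out_core_a, uint32_t out_core_b)
    {
        return out_core_a == out_core_b;
    }

The planned four-processor, two-comparator version would allow majority-style correction in addition to detection, as the abstract notes.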
IPAD: Integrated Programs for Aerospace-vehicle Design
NASA Technical Reports Server (NTRS)
Miller, R. E., Jr.
1985-01-01
Early work was performed to apply data base technology in support of the management of engineering data in the design and manufacturing environments. The principal objective of the IPAD project is to develop a computer software system for use in the design of aerospace vehicles. Two prototype systems were created for this purpose. Relational Information Manager (RIM) is a successful commercial product. The IPAD Information Processor (IPIP), a much more sophisticated system, is still under development.
Shared performance monitor in a multiprocessor system
Chiu, George; Gara, Alan G; Salapura, Valentina
2014-12-02
A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor device units, each processor device for generating signals representing occurrences of events in the processor device, and, a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU is further programmed to monitor event signals issued from non-processor devices.
A diesel fuel processor for fuel-cell-based auxiliary power unit applications
NASA Astrophysics Data System (ADS)
Samsun, Remzi Can; Krekel, Daniel; Pasel, Joachim; Prawitz, Matthias; Peters, Ralf; Stolten, Detlef
2017-07-01
Producing a hydrogen-rich gas from diesel fuel enables the efficient generation of electricity in a fuel-cell-based auxiliary power unit. In recent years, significant progress has been achieved in diesel reforming. One issue encountered is the stable operation of water-gas shift reactors with real reformates. A new fuel processor is developed using a commercial shift catalyst. The system is operated using optimized start-up and shut-down strategies. Experiments with diesel and kerosene fuels show slight performance drops in the shift reactor during continuous operation for 100 h. CO concentrations much lower than the target value are achieved during system operation in auxiliary power unit mode at partial loads of up to 60%. The regeneration leads to full recovery of the shift activity. Finally, a new operation strategy is developed whereby the gas hourly space velocity of the shift stages is re-designed. This strategy is validated using different diesel and kerosene fuels, showing a maximum CO concentration of 1.5% at the fuel processor outlet under extreme conditions, which can be tolerated by a high-temperature PEFC. The proposed operation strategy solves the issue of strong performance drop in the shift reactor and makes this technology available for reducing emissions in the transportation sector.
Energy consumption estimation of an OMAP-based Android operating system
NASA Astrophysics Data System (ADS)
González, Gabriel; Juárez, Eduardo; Castro, Juan José; Sanz, César
2011-05-01
System-level energy optimization of battery-powered multimedia embedded systems has recently become a design goal. The poor operational time of multimedia terminals makes computationally demanding applications impractical in real scenarios. For instance, the so-called smart-phones are currently unable to remain in operation longer than several hours. The OMAP3530 processor basically consists of two processing cores, a General Purpose Processor (GPP) and a Digital Signal Processor (DSP). The former, an ARM Cortex-A8 processor, is aimed at running a generic Operating System (OS) while the latter, a DSP core based on the C64x+, has an architecture optimized for video processing. The BeagleBoard, a commercial prototyping board based on the OMAP processor, has been used to test the Android Operating System and measure its performance. The board has 128 MB of SDRAM external memory, 256 MB of Flash external memory and several interfaces. Note that the clock frequencies of the ARM and DSP OMAP cores are 600 MHz and 430 MHz, respectively. This paper describes the energy consumption estimation of the processes and multimedia applications of an Android v1.6 (Donut) OS on the OMAP3530-based BeagleBoard. In addition, tools to communicate between the two processing cores have been employed. A test-bench to profile the OS resource usage has been developed. As far as the energy estimates are concerned, the OMAP processor energy consumption model provided by the manufacturer has been used. The model is basically divided into two energy components. The former, the baseline core energy, describes the energy consumption that is independent of any chip activity. The latter, the module active energy, describes the energy consumed by the active modules depending on resource usage.
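The two-component model lends itself to a very small estimation routine: total energy is the activity-independent baseline plus the usage-weighted active energies of the modules. The sketch below illustrates that structure only; the module count, power figures, and function name are hypothetical, not the manufacturer's actual OMAP3530 model parameters.

    /* Sketch of the two-component energy model described above:
     * total energy = baseline core energy (activity independent)
     *              + sum of active-module energies weighted by usage.
     * All values and the module list are hypothetical placeholders. */
    #define NUM_MODULES 3

    double estimate_energy(double runtime_s,
                           double baseline_power_w,
                           const double module_active_power_w[NUM_MODULES],
                           const double module_usage_fraction[NUM_MODULES])
    {
        double energy = baseline_power_w * runtime_s;   /* baseline component */
        for (int m = 0; m < NUM_MODULES; m++)           /* active component   */
            energy += module_active_power_w[m]
                    * module_usage_fraction[m] * runtime_s;
        return energy;                                  /* joules */
    }

In a profiling test-bench of the kind described, the usage fractions would come from the OS resource-usage traces gathered on the board.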
Design and implementation of projects with Xilinx Zynq FPGA: a practical case
NASA Astrophysics Data System (ADS)
Travaglini, R.; D'Antone, I.; Meneghini, S.; Rignanese, L.; Zuffa, M.
The main advantage of using FPGAs with embedded processors is the availability of several additional high-performance resources in the same physical device. Moreover, the FPGA programmability allows custom peripherals to be connected. Xilinx has designed a programmable device named Zynq-7000 (simply called Zynq in the following), which integrates programmable logic (identical to the other Xilinx "series 7" devices) with a System on Chip (SoC) based on two embedded ARM processors. Since both parts are deeply connected, designers benefit from the performance of the hardware SoC and the flexibility of programmable logic as well. In this paper a design developed by the Electronic Design Department at the Bologna Division of INFN will be presented as a practical case of a project based on the Zynq device. It is developed using a commercial board called ZedBoard hosting an FMC mezzanine with a 12-bit 500 MS/s ADC. The Zynq FPGA on the ZedBoard receives digital outputs from the ADC and sends them to the acquisition PC, after proper formatting, through a Gigabit Ethernet link. The major focus of the paper will be on the methodology to develop a Zynq-based design with the Xilinx Vivado software, highlighting how to configure the SoC and connect it with the programmable logic. Firmware design techniques will be presented: in particular both VHDL and IP core based strategies will be discussed. Further, the procedure to develop software for the embedded processor will be presented. Finally, some debugging tools, like the embedded Logic Analyzer, will be shown. Advantages and disadvantages with respect to adopting FPGAs without embedded processors will be discussed.
Messiah College Biodiesel Fuel Generation Project Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zummo, Michael M; Munson, J; Derr, A
Many obvious and significant concerns arise when considering the concept of small-scale biodiesel production. Does the fuel produced meet the stringent requirements set by the commercial biodiesel industry? Is the process safe? How are small-scale producers collecting and transporting waste vegetable oil? How is waste from the biodiesel production process handled by small-scale producers? These concerns and many others were the focus of the research performed in the Messiah College Biodiesel Fuel Generation project over the last three years. This project was a unique research program in which undergraduate engineering students at Messiah College set out to research the feasibility of small-scale biodiesel production for application on a campus of approximately 3000 students. This Department of Energy (DOE) funded research program developed out of almost a decade of small-scale biodiesel research and development work performed by students at Messiah College. Over the course of the last three years the research team focused on four key areas related to small-scale biodiesel production: Quality Testing and Assurance, Process and Processor Research, Process and Processor Development, and Community Education. The objectives for the Messiah College Biodiesel Fuel Generation Project included the following: 1. Preparing a laboratory facility for the development and optimization of processors and processes, ASTM quality assurance, and performance testing of biodiesel fuels. 2. Developing scalable processor and process designs suitable for ASTM certifiable small-scale biodiesel production, with the goals of cost reduction and increased quality. 3. Conducting research into biodiesel process improvement and cost optimization using various biodiesel feedstocks and production ingredients.
Implementation of kernels on the Maestro processor
NASA Astrophysics Data System (ADS)
Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.
Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at a 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to a single tile was up to 49 using 49 tiles.
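For reference, the fragment below is a plain, single-tile C version of one of the listed kernels, a direct-form FIR filter. On Maestro the output samples would be partitioned across tiles so each core's FPU works on an independent slice; coefficients and sizes here are unspecified and purely illustrative.

    /* Direct-form FIR filter: y[i] = sum_k h[k] * x[i-k].
     * Serial reference version; a many-core port would split the output
     * range i across tiles, since each output is independent. */
    void fir(const float *x, const float *h, float *y, int n, int taps)
    {
        for (int i = 0; i < n; i++) {
            float acc = 0.0f;
            for (int k = 0; k < taps; k++)
                if (i - k >= 0)
                    acc += h[k] * x[i - k];
            y[i] = acc;
        }
    }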
Mechanically verified hardware implementing an 8-bit parallel IO Byzantine agreement processor
NASA Technical Reports Server (NTRS)
Moore, J. Strother
1992-01-01
Consider a network of four processors that use the Oral Messages (Byzantine Generals) Algorithm of Pease, Shostak, and Lamport to achieve agreement in the presence of faults. Bevier and Young have published a functional description of a single processor that, when interconnected appropriately with three identical others, implements this network under the assumption that the four processors step in synchrony. By formalizing the original Pease, et al work, Bevier and Young mechanically proved that such a network achieves fault tolerance. We develop, formalize, and discuss a hardware design that has been mechanically proven to implement their processor. In particular, we formally define mapping functions from the abstract state space of the Bevier-Young processor to a concrete state space of a hardware module and state a theorem that expresses the claim that the hardware correctly implements the processor. We briefly discuss the Brock-Hunt Formal Hardware Description Language which permits designs both to be proved correct with the Boyer-Moore theorem prover and to be expressed in a commercially supported hardware description language for additional electrical analysis and layout. We briefly describe our implementation.
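For context, the Oral Messages algorithm for four processors tolerating one fault, OM(1), has each lieutenant combine the order received from the commander with the values relayed by the other two lieutenants and take a majority. The C fragment below sketches only that voting step, under a single-bit-order simplification; it is not the Bevier-Young formal model or the verified hardware design.

    /* One lieutenant's decision in OM(1) with four processors: majority of
     * the commander's order and the two values relayed by the other
     * lieutenants. With at most one faulty participant, the non-faulty
     * lieutenants reach the same decision. Orders are simplified to bits. */
    static int majority3(int a, int b, int c)
    {
        return (a + b + c) >= 2;   /* majority of three single-bit values */
    }

    int lieutenant_decision(int from_commander,
                            int relayed_by_other_1,
                            int relayed_by_other_2)
    {
        return majority3(from_commander, relayed_by_other_1, relayed_by_other_2);
    }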
Virtex-II Pro SEE Test Methods and Results
NASA Technical Reports Server (NTRS)
Petrick, David; Powell, Wesley; Howard, James W., Jr.; LaBel, Kenneth A.
2004-01-01
The objective of this coarse Single Event Effect (SEE) test is to determine the suitability of the commercial Virtex-II Pro family for use in spaceflight applications. To this end, this test is primarily intended to determine any Single Event Latchup (SEL) susceptibilities for these devices. Secondly, this test is intended to measure the level of Single Event Upset (SEU) susceptibilities and, in a general sense, where they occur. The coarse SEE test was performed on a commercial XC2VP7 device, a relatively small single processor version of the Virtex-II Pro. As the XC2VP7 shares the same functional block design and fabrication process with the larger Virtex-II Pro devices, the results of this test should also be applicable to the larger devices. The XC2VP7 device was tested on a commercial Virtex-II Pro development board. The testing was performed at the Cyclotron laboratories at Texas A&M and Michigan State Universities using ions of varying energy levels and fluences.
30/20 GHz communications systems baseband processor development
NASA Astrophysics Data System (ADS)
Brown, L.; Sabourin, D.; Stilwell, J.; McCallister, R.; Borota, M.
The architecture and system design concepts for a commercial satellite communications system planned for the 1990's have been developed. The system provides data communications between the individual users via trunking and customer premise service terminals utilizing a central switching satellite operating in a time-division multiple-access mode. Baseband processing is employed to route and control traffic on an individual message basis while providing significant advantages in improved link margins and system flexibility. Key technology developments required to prove the flight readiness of the baseband processor design are being verified in the baseband processor proof-of-concept model described herein.
Embedded Data Processor and Portable Computer Technology testbeds
NASA Technical Reports Server (NTRS)
Alena, Richard; Liu, Yuan-Kwei; Goforth, Andre; Fernquist, Alan R.
1993-01-01
Attention is given to current activities in the Embedded Data Processor and Portable Computer Technology testbed configurations that are part of the Advanced Data Systems Architectures Testbed at the Information Sciences Division at NASA Ames Research Center. The Embedded Data Processor Testbed evaluates advanced microprocessors for potential use in mission and payload applications within the Space Station Freedom Program. The Portable Computer Technology (PCT) Testbed integrates and demonstrates advanced portable computing devices and data system architectures. The PCT Testbed uses both commercial and custom-developed devices to demonstrate the feasibility of functional expansion and networking for portable computers in flight missions.
30/20 GHz communications systems baseband processor development
NASA Technical Reports Server (NTRS)
Brown, L.; Sabourin, D.; Stilwell, J.; Mccallister, R.; Borota, M.
1982-01-01
The architecture and system design concepts for a commercial satellite communications system planned for the 1990's have been developed. The system provides data communications between the individual users via trunking and customer premise service terminals utilizing a central switching satellite operating in a time-division multiple-access mode. Baseband processing is employed to route and control traffic on an individual message basis while providing significant advantages in improved link margins and system flexibility. Key technology developments required to prove the flight readiness of the baseband processor design are being verified in the baseband processor proof-of-concept model described herein.
A light hydrocarbon fuel processor producing high-purity hydrogen
NASA Astrophysics Data System (ADS)
Löffler, Daniel G.; Taylor, Kyle; Mason, Dylan
This paper discusses the design process and presents performance data for a dual fuel (natural gas and LPG) fuel processor for PEM fuel cells delivering between 2 and 8 kW electric power in stationary applications. The fuel processor resulted from a series of design compromises made to address different design constraints. First, the product quality was selected; then, the unit operations needed to achieve that product quality were chosen from the pool of available technologies. Next, the specific equipment needed for each unit operation was selected. Finally, the unit operations were thermally integrated to achieve high thermal efficiency. Early in the design process, it was decided that the fuel processor would deliver high-purity hydrogen. Hydrogen can be separated from other gases by pressure-driven processes based on either selective adsorption or permeation. The pressure requirement made steam reforming (SR) the preferred reforming technology because it does not require compression of combustion air; therefore, steam reforming is more efficient in a high-pressure fuel processor than alternative technologies like autothermal reforming (ATR) or partial oxidation (POX), where the combustion occurs at the pressure of the process stream. A low-temperature pre-reformer reactor is needed upstream of a steam reformer to suppress coke formation; yet, low temperatures facilitate the formation of metal sulfides that deactivate the catalyst. For this reason, a desulfurization unit is needed upstream of the pre-reformer. Hydrogen separation was implemented using a palladium alloy membrane. Packed beds were chosen for the pre-reformer and reformer reactors primarily because of their low cost, relatively simple operation and low maintenance. Commercial, off-the-shelf balance of plant (BOP) components (pumps, valves, and heat exchangers) were used to integrate the unit operations. The fuel processor delivers up to 100 slm hydrogen >99.9% pure with <1 ppm CO, <3 ppm CO 2. The thermal efficiency is better than 67% operating at full load. This fuel processor has been integrated with a 5-kW fuel cell producing electricity and hot water.
DOT National Transportation Integrated Search
2003-04-01
The objective of this study was to assess the feasibility of using commercial off-the-shelf (COTS) processor-based systems for safety-related railroad applications. From the safety perspective, the fundamental challenges of using COTS products are most...
Runtime Performance Monitoring Tool for RTEMS System Software
NASA Astrophysics Data System (ADS)
Cho, B.; Kim, S.; Park, H.; Kim, H.; Choi, J.; Chae, D.; Lee, J.
2007-08-01
RTEMS is a commercial-grade real-time operating system that supports multi-processor computers. However, there are not many development tools for RTEMS. In this paper, we report a new RTEMS-based runtime performance monitoring tool. We have implemented a lightweight runtime monitoring task with an extension to the RTEMS APIs. Using our tool, software developers can verify various performance-related parameters during runtime. Our tool can be used during the software development phase and during in-orbit operation as well. Our implemented target agent is lightweight and has small overhead using the SpaceWire interface. Efforts to reduce overhead and to add other monitoring parameters are currently under way.
Extended performance electric propulsion power processor design study. Volume 2: Technical summary
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.
1977-01-01
Electric propulsion power processor technology has progressed during the past decade to the point that it is considered ready for application. Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30 cm ion thruster power processor with a beam supply power rating of 2.2 kW to 10 kW for the main propulsion power stage. Extensions in power processor performance were defined and were designed in sufficient detail to determine efficiency, component weight, part count, reliability and thermal control. A detailed design was performed on a microprocessor as the thyristor power processor controller. A reliability analysis was performed to evaluate the effect of the control electronics redesign. Preliminary electrical design, mechanical design and thermal analysis were performed on a 6 kW power transformer for the beam supply. Bi-Mod mechanical, structural and thermal control configurations were evaluated for the power processor and preliminary estimates of mechanical weight were determined.
NASA Astrophysics Data System (ADS)
Pruhs, Kirk
A particularly important emergent technology is heterogeneous processors (or cores), which many computer architects believe will be the dominant architectural design in the future. The main advantage of a heterogeneous architecture, relative to an architecture of identical processors, is that it allows for the inclusion of processors whose design is specialized for particular types of jobs, and for jobs to be assigned to a processor best suited for that job. Most notably, it is envisioned that these heterogeneous architectures will consist of a small number of high-power high-performance processors for critical jobs, and a larger number of lower-power lower-performance processors for less critical jobs. Naturally, the lower-power processors would be more energy efficient in terms of the computation performed per unit of energy expended, and would generate less heat per unit of computation. For a given area and power budget, heterogeneous designs can give significantly better performance for standard workloads. Moreover, even processors that were designed to be homogeneous, are increasingly likely to be heterogeneous at run time: the dominant underlying cause is the increasing variability in the fabrication process as the feature size is scaled down (although run time faults will also play a role). Since manufacturing yields would be unacceptably low if every processor/core was required to be perfect, and since there would be significant performance loss from derating the entire chip to the functioning of the least functional processor (which is what would be required in order to attain processor homogeneity), some processor heterogeneity seems inevitable in chips with many processors/cores.
Electroacoustic dewatering of food and other suspensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, B.C.; Zelinski, M.S.; Criner, C.L.
1989-05-31
The food processing industry is a large user of energy for evaporative drying due to limited effectiveness of conventional mechanical dewatering machines. Battelle's Electroacoustic Dewatering (EAD) process improves the performance of mechanical dewatering machines by superimposing electric and ultrasonic fields. A two phase development program to demonstrate the benefits of EAD was carried out in cooperation with the food processing industry, the National Food Processors Association (NFPA) and two equipment vendors. In Phase I, laboratory scale studies were carried out on a variety of food suspensions. The process was scaled up to small commercial scale in Phase II. The technical feasibility of EAD for a variety of food materials, without adversely affecting the food properties, was successfully demonstrated during this phase, which is the subject of this report. Two Process Research Units (PRUs) were designed and built through joint efforts between Battelle and two equipment vendors. A 0.5-meter wide belt press was tested on apple mash, corn fiber, and corn gluten at sites provided by two food processors. A high speed citrus juice finisher (a hybrid form of screw press and centrifuge) was tested on orange pulp. These tests were carried out jointly by Battelle, equipment vendors, NFPA, and food processors. The apple and citrus juice products were analyzed by food processors and NFPA. 26 figs., 30 tabs.
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmalz, Mark S
2011-07-24
Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
Radiation Tolerant, FPGA-Based SmallSat Computer System
NASA Technical Reports Server (NTRS)
LaMeres, Brock J.; Crum, Gary A.; Martinez, Andres; Petro, Andrew
2015-01-01
The Radiation Tolerant, FPGA-based SmallSat Computer System (RadSat) computing platform exploits a commercial off-the-shelf (COTS) Field Programmable Gate Array (FPGA) with real-time partial reconfiguration to provide increased performance, power efficiency and radiation tolerance at a fraction of the cost of existing radiation hardened computing solutions. This technology is ideal for small spacecraft that require state-of-the-art on-board processing in harsh radiation environments but where using radiation hardened processors is cost prohibitive.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Definitions. For the purposes of this part: (a)-(b) [Reserved] (c) Promotional material includes: (1) Any text... books and records of an individual, a partnership, corporation or other type association (1) for one of...) Commercial leverage account means an account of a commercial enterprise, such as a producer, processor...
Autonomous, agile micro-satellites and supporting technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breitfeller, E; Dittman, M D; Gaughan, R J
1999-07-19
This paper updates the on-going effort at Lawrence Livermore National Laboratory to develop autonomous, agile micro-satellites (MicroSats). The objective of this development effort is to develop MicroSats weighing only a few tens of kilograms, that are able to autonomously perform precision maneuvers and can be used telerobotically in a variety of mission modes. The required capabilities include satellite rendezvous, inspection, proximity-operations, docking, and servicing. The MicroSat carries an integrated proximity-operations sensor-suite incorporating advanced avionics. A new self-pressurizing propulsion system utilizing a miniaturized pump and non-toxic mono-propellant hydrogen peroxide was successfully tested. This system can provide a nominal 25 kg MicroSat with 200-300 m/s delta-v including a warm-gas attitude control system. The avionics is based on the latest PowerPC processor using a CompactPCI bus architecture, which is modular, high-performance and processor-independent. This leverages commercial-off-the-shelf (COTS) technologies and minimizes the effects of future changes in processors. The MicroSat software development environment uses the Vx-Works real-time operating system (RTOS) that provides a rapid development environment for integration of new software modules, allowing early integration and test. We will summarize results of recent integrated ground flight testing of our latest non-toxic pumped propulsion MicroSat testbed vehicle operated on our unique dynamic air-rail.
Demonstration program for Omega receiver prototype microcomputer data processing
NASA Technical Reports Server (NTRS)
Lilley, R. W.
1976-01-01
The JOLT (TM) commercial microcomputer, based on the MOS Technology 6502 processor chip, is evaluated for use in an Omega navigation system receiver. A computer program was prepared in hand-assembled code to demonstrate receiver operation. The processor performs binary processing with interrupts enabled; a carriage return is issued to initialize the teleprinter, and a jump is performed to enter the program loop and wait for an interrupt. The program loop operates continuously, testing the interrupt flag. The interrupt routine reads the receiver status word and determines whether the current time-slot is the A slot. If so, the interrupt flag, which is also the data index pointer, is reset to zero. The status word is stored in the status buffer. If the time-slot is not A, the interrupt flag/pointer is incremented by one to index the phase and status words into the proper buffer locations for later use by the print routine.
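The interrupt logic described above can be summarized in a few lines of C. The original was hand-assembled 6502 code, so the sketch below is only a rough reconstruction: the status register address, the bit test used to recognize slot A, and the buffer layout are all hypothetical.

    /* Rough C reconstruction of the Omega receiver interrupt routine.
     * Addresses, masks, and buffer sizes are hypothetical placeholders. */
    #include <stdint.h>

    #define NUM_TIME_SLOTS 8   /* Omega format: time-slots A through H */

    static volatile uint8_t *receiver_status =
        (volatile uint8_t *)0xC000;             /* hypothetical register */
    static uint8_t status_buffer[NUM_TIME_SLOTS];
    static volatile uint8_t slot_index;         /* interrupt flag / data pointer */

    void omega_interrupt(void)
    {
        uint8_t status = *receiver_status;      /* read receiver status word  */

        if ((status & 0x07) == 0)               /* current time-slot is A?    */
            slot_index = 0;                     /* reset flag/pointer         */
        else
            slot_index++;                       /* advance to next slot entry */

        /* store for later use by the print routine */
        status_buffer[slot_index % NUM_TIME_SLOTS] = status;
    }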
2005-12-01
The goal of the Configurable Fault Tolerant Processor (CFTP) Project is to explore, develop and demonstrate the applicability of using commercial-off-the-shelf (COTS) Field Programmable Gate Arrays (FPGA) in the design of...
System analysis of graphics processor architecture using virtual prototyping
NASA Astrophysics Data System (ADS)
Hancock, William R.; Groat, Jeff; Steeves, Todd; Spaanenburg, Henk; Shackleton, John
1995-06-01
Honeywell has been actively involved in the definition of the next generation of display processors for military and commercial cockpits. A major concern is how to achieve super graphics workstation performance in an avionics application. Most notable are requirements for low volume, low power, harsh environmental conditions, real-time performance and low cost. This paper describes the application of VHDL to the system analysis tasks associated with achieving these goals in a cost effective manner. The paper will describe the top level architecture identified to provide the graphical and video processing power needed to drive future high resolution display devices and to generate more natural panoramic 3D formats. The major discussion, however, will be on the use of VHDL to model the processing elements and customized pipelines needed to realize the architecture and to perform the complex system tradeoff studies necessary to achieve a cost effective implementation. New software tools have been developed to allow 'virtual' prototyping in the VHDL environment. This results in a hardware/software codesign using VHDL performance and functional models. This unique architectural tool allows simulation and tradeoffs within a standard and tightly integrated toolset, which eventually will be used to specify and design the entire system from the top level requirements and system performance to the lowest level individual ASICs. New processing elements, algorithms, and standard graphical inputs can be designed, tested and evaluated without costly hardware prototyping using the innovative 'virtual' prototyping techniques which are evolving on this project. In addition, virtual prototyping of the display processor does not bind the preliminary design to point solutions as a physical prototype will. When the development schedule is known, one can extrapolate processing element performance and design the system around the most current technology.
Multi-Core Processor Memory Contention Benchmark Analysis Case Study
NASA Technical Reports Server (NTRS)
Simon, Tyler; McGalliard, James
2009-01-01
Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
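A synthetic kernel for exposing this kind of contention can be as simple as several threads streaming through large private arrays at the same time: as more cores run the same loop, they compete for shared cache and memory bandwidth and per-core throughput drops. The sketch below is a generic pthread illustration with arbitrary sizes, not the actual synthetic or natural benchmarks run at Goddard.

    /* Memory-bandwidth stress sketch: each thread runs a STREAM-style triad
     * over its own arrays; timing the run for 1, 2, 4, ... threads shows
     * how shared memory paths saturate on a multi-core node. Sizes and
     * thread counts are arbitrary illustrative choices. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 22)               /* elements per array, per thread */

    static void *stream_triad(void *arg)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        for (long i = 0; i < N; i++)  /* bandwidth-bound triad */
            a[i] = b[i] + 3.0 * c[i];

        printf("thread done, a[0] = %f\n", a[0]);  /* keep the work live */
        free(a); free(b); free(c);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int nthreads = (argc > 1) ? atoi(argv[1]) : 4;
        if (nthreads < 1) nthreads = 1;
        if (nthreads > 64) nthreads = 64;

        pthread_t tid[64];
        for (int t = 0; t < nthreads; t++)
            pthread_create(&tid[t], NULL, stream_triad, NULL);
        for (int t = 0; t < nthreads; t++)
            pthread_join(tid[t], NULL);
        return 0;
    }

Timing the whole run externally for increasing thread counts makes the contention visible: ideally time stays flat, but on contended memory subsystems it grows with the number of active cores.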
Buffered coscheduling for parallel programming and enhanced fault tolerance
Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM
2006-01-31
A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval is accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
NASA Astrophysics Data System (ADS)
Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.
2017-11-01
The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many core processors and graphics processors. Several case studies are presented to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.
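As a flavor of the shared-memory side of such a comparison, the sketch below parallelizes a simple elementwise kernel with OpenMP; on the GPU side the same loop body would be launched as a CUDA grid of threads. The array size and the kernel itself are arbitrary illustrative choices, not the case studies from the paper.

    /* Elementwise kernel parallelized with OpenMP: every iteration is
     * independent, which is exactly the data-level parallelism that maps
     * to both multicore CPUs and SIMD-style GPUs. Compile with -fopenmp. */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const int n = 1 << 20;
        float *x = malloc(n * sizeof *x), *y = malloc(n * sizeof *y);
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f (up to %d threads)\n", y[0], omp_get_max_threads());
        free(x); free(y);
        return 0;
    }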
NASA Astrophysics Data System (ADS)
Pujiyanto; Yasin, M.; Rusydi, F.
2018-03-01
Development of lead ion detection systems is expected to offer advantages in terms of device simplicity and ease of lead-ion concentration analysis with very high performance. One important part of a lead ion detection system is the electrical signal acquisition stage. The electrical signal acquisition stage uses these main electronic components: a non-inverting op-amp, an instrumentation amplifier, a multiplier circuit and a logarithmic amplifier. Here the performance of the lead ion detection system is shown when the electrical signal processor is built from commercial electronic components. The results drawn from this experiment are that the lead ion sensor that has been developed can detect lead ions with a sensitivity of 10.48 mV/ppm, a linearity of 97.11%, and a measurement range of 0.1 ppm to 80 ppm.
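The reported sensitivity can be read as the slope of a linear calibration, so converting a measured output voltage back to a concentration is a one-line calculation. In the example below the zero-concentration offset and the example reading are hypothetical values chosen only to illustrate the arithmetic; only the 10.48 mV/ppm slope comes from the reported result.

    /* Worked example: with a linear response V = V0 + S * C and
     * S = 10.48 mV/ppm, invert the calibration to recover concentration.
     * V0 and the measured voltage are hypothetical illustration values. */
    #include <stdio.h>

    int main(void)
    {
        const double sensitivity_mv_per_ppm = 10.48;  /* reported slope */
        const double v_offset_mv = 50.0;              /* hypothetical zero-ppm output */
        const double v_measured_mv = 260.0;           /* example reading */

        double concentration_ppm =
            (v_measured_mv - v_offset_mv) / sensitivity_mv_per_ppm;
        printf("estimated concentration: %.1f ppm\n", concentration_ppm);
        return 0;
    }

With these example numbers the estimate is about 20 ppm, inside the stated 0.1-80 ppm measurement range.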
AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program
NASA Astrophysics Data System (ADS)
Gothard, Benny M.
2002-02-01
One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact in a complex, unstructured, dynamic environment (see 'A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation'). The Mobile Autonomous Robot Software Self Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape Processing Architecture. The processor chosen is the Motorola AltiVec-G4 Power PC (PPC) (1998 Motorola, Inc.), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code will be benchmarked and compared against previous Demo II-Semi-autonomous Surrogate Vehicle processing architectures along with desktop Personal Computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.
A Qualitative Security Analysis of a New Class of 3-D Integrated Crypto Co-processors
2012-01-01
and mobile phones, lottery ticket vending machines, and various electronic payment systems. The main reason for their use in such applications is that...military applications such as secure communication links. However, the proliferation of Automated Teller Machines (ATMs) in the '80s introduced them to...commercial applications. Today many popular consumer devices have cryptographic processors in them, for example, smart-cards for pay-TV access machines
Towards the formal verification of the requirements and design of a processor interface unit
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1993-01-01
The formal verification of the design and partial requirements for a Processor Interface Unit (PIU) using the Higher Order Logic (HOL) theorem-proving system is described. The processor interface unit is a single-chip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. It provides the opportunity to investigate the specification and verification of a real-world subsystem within a commercially-developed fault-tolerant computer. An overview of the PIU verification effort is given. The actual HOL listings from the verification effort are documented in a companion NASA contractor report entitled 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit - HOL Listings' including the general-purpose HOL theories and definitions that support the PIU verification as well as tactics used in the proofs.
Acoustically based fetal heart rate monitor
NASA Technical Reports Server (NTRS)
Baker, Donald A.; Zuckerwar, Allan J.
1991-01-01
The acoustically based fetal heart rate monitor permits an expectant mother to perform the fetal Non-Stress Test in her home. The potential market would include the one million U.S. pregnancies per year requiring this type of prenatal surveillance. The monitor uses polyvinylidene fluoride (PVF2) piezoelectric polymer film for the acoustic sensors, which are mounted in a seven-element array on a cummerbund. Evaluation of the sensor output signals utilizes a digital signal processor, which performs a linear prediction routine in real time. Clinical tests reveal that the acoustically based monitor provides Non-Stress Test records which are comparable to those obtained with a commercial ultrasonic transducer.
NASA Astrophysics Data System (ADS)
Nikmehr, Hooman; Phillips, Braden; Lim, Cheng-Chew
2005-02-01
Recently, decimal arithmetic has become attractive in the financial and commercial world including banking, tax calculation, currency conversion, insurance and accounting. Although computers are still carrying out decimal calculation using software libraries and binary floating-point numbers, it is likely that in the near future, all processors will be equipped with units performing decimal operations directly on decimal operands. One critical building block for some complex decimal operations is the decimal carry-free adder. This paper discusses the mathematical framework of the addition, introduces a new signed-digit format for representing decimal numbers and presents an efficient architectural implementation. Delay estimation analysis shows that the adder offers improved performance over earlier designs.
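To make the carry-free idea concrete, the sketch below performs digit-wise addition on signed decimal digits in the range -9..9: each position emits an interim digit and a transfer of at most one to its neighbor, so no carry chain can propagate. This is a common textbook signed-digit formulation, not the specific digit encoding or hardware architecture proposed in the paper.

    /* Carry-free addition of signed-digit decimal operands, least
     * significant digit first. Each position produces an interim digit u
     * in [-8, 8] and a transfer t in {-1, 0, 1}; the final digit u + t_in
     * stays in [-9, 9], so carries never propagate more than one place. */
    #include <stdio.h>

    void sd_decimal_add(const int *x, const int *y, int *s, int n)
    {
        int t_in = 0;
        for (int i = 0; i < n; i++) {
            int w = x[i] + y[i];            /* position sum in [-18, 18] */
            int t, u;
            if (w >= 9)       { t = 1;  u = w - 10; }
            else if (w <= -9) { t = -1; u = w + 10; }
            else              { t = 0;  u = w; }
            s[i] = u + t_in;                /* guaranteed in [-9, 9] */
            t_in = t;
        }
        s[n] = t_in;                        /* most significant digit */
    }

    int main(void)
    {
        int x[] = {8, 7, 2};   /* 278, least significant digit first */
        int y[] = {6, 4, 5};   /* 546 */
        int s[4];
        sd_decimal_add(x, y, s, 3);
        printf("digits (LSD first): %d %d %d %d\n", s[0], s[1], s[2], s[3]);
        return 0;
    }

The example adds 278 and 546 and prints the digits 4 2 8 0, i.e. 824; because every position depends only on its own inputs and one neighbor's transfer, all digit positions can be computed in parallel in hardware.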
Commercial Off-The-Shelf (COTS) Graphics Processing Board (GPB) Radiation Test Evaluation Report
NASA Technical Reports Server (NTRS)
Salazar, George A.; Steele, Glen F.
2013-01-01
Large round trip communications latency for deep space missions will require more onboard computational capabilities to enable the space vehicle to undertake many tasks that have traditionally been ground-based, mission control responsibilities. As a result, visual display graphics will be required to provide simpler vehicle situational awareness through graphical representations, as well as provide capabilities never before done in a space mission, such as augmented reality for in-flight maintenance or Telepresence activities. These capabilities will require graphics processors and associated support electronic components for high computational graphics processing. In an effort to understand the performance of commercial graphics card electronics operating in the expected radiation environment, a preliminary test was performed on five commercial off-the-shelf (COTS) graphics cards. This paper discusses the preliminary evaluation test results of five COTS graphics processing cards tested to the International Space Station (ISS) low earth orbit radiation environment. Three of the five graphics cards were tested to a total dose of 6000 rads (Si). The test articles, test configuration, preliminary results, and recommendations are discussed.
Tomlins, Keith Ian; Chijioke, Ugo; Westby, Andrew
2018-01-01
Gari, a fermented and dried semolina made from cassava, is one of the most common foods in West Africa. Recently introduced biofortified yellow cassava containing provitamin A carotenoids could help tackle vitamin A deficiency prevalent in those areas. However there are concerns because of the low retention of carotenoids during gari processing compared to other processes (e.g. boiling). The aim of the study was to assess the levels of true retention in trans–β-carotene during gari processing and investigate the causes of low retention. Influence of processing step, processor (3 commercial processors) and variety (TMS 01/1371; 01/1368 and 01/1412) were assessed. It was shown that low true retention (46% on average) during gari processing may be explained by not only chemical losses (i.e. due to roasting temperature) but also by physical losses (i.e. due to leaching of carotenoids in discarded liquids): true retention in the liquid lost from grating negatively correlated with true retention retained in the mash (R = -0.914). Moreover, true retention followed the same pattern as lost water at the different processing steps (i.e. for the commercial processors). Variety had a significant influence on true retention, carotenoid content, and trans-cis isomerisation but the processor type had little effect. It is the first time that the importance of physical carotenoid losses was demonstrated during processing of biofortified crops. PMID:29561886
Bechoff, Aurélie; Tomlins, Keith Ian; Chijioke, Ugo; Ilona, Paul; Westby, Andrew; Boy, Erick
2018-01-01
Gari, a fermented and dried semolina made from cassava, is one of the most common foods in West Africa. Recently introduced biofortified yellow cassava containing provitamin A carotenoids could help tackle vitamin A deficiency prevalent in those areas. However there are concerns because of the low retention of carotenoids during gari processing compared to other processes (e.g. boiling). The aim of the study was to assess the levels of true retention in trans-β-carotene during gari processing and investigate the causes of low retention. Influence of processing step, processor (3 commercial processors) and variety (TMS 01/1371; 01/1368 and 01/1412) were assessed. It was shown that low true retention (46% on average) during gari processing may be explained by not only chemical losses (i.e. due to roasting temperature) but also by physical losses (i.e. due to leaching of carotenoids in discarded liquids): true retention in the liquid lost from grating negatively correlated with true retention retained in the mash (R = -0.914). Moreover, true retention followed the same pattern as lost water at the different processing steps (i.e. for the commercial processors). Variety had a significant influence on true retention, carotenoid content, and trans-cis isomerisation but the processor type had little effect. It is the first time that the importance of physical carotenoid losses was demonstrated during processing of biofortified crops.
A multitasking, multisinked, multiprocessor data acquisition front end
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, R.; Au, R.; Molen, A.V.
1989-10-01
The authors have developed a generalized data acquisition front end system which is based on MC68020 processors running a commercial real time kernel (pSOS), and implemented primarily in a high level language (C). This system has been attached to the back end on-line computing system at NSCL via our high performance ETHERNET protocol. Data may be simultaneously sent to any number of back end systems. Fixed fraction sampling along links to back end computing is also supported. A nonprocedural program generator simplifies the development of experiment specific code.
On-board multicarrier demodulator for mobile applications using DSP implementation
NASA Astrophysics Data System (ADS)
Yim, W. H.; Kwan, C. C. D.; Coakley, F. P.; Evans, B. G.
1990-11-01
This paper describes the design and implementation of an on-board multicarrier demodulator using commercial digital signal processors. This is for use in a mobile satellite communication system employing an up-link SCPC/FDMA scheme. Channels are separated by a flexible multistage digital filter bank followed by a channel-multiplexed digital demodulator array. The cross/dot product design approach for the error detector leads to a new QPSK frequency control algorithm that allows fast acquisition without a special preamble pattern. Timing correction is performed digitally using an extended stack of polyphase sub-filters.
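For orientation, a cross/dot-product frequency discriminator in its simplest textbook form is shown below: the cross product of consecutive complex samples tracks the phase step per sample, and the sign of the dot product is used to strip data-induced phase flips. The actual QPSK algorithm derived in the paper differs in detail; this fragment is illustrative only.

    /* Textbook-style cross/dot frequency error detector for complex
     * baseband samples. The returned value is proportional to the
     * frequency offset and would feed the carrier frequency control loop. */
    typedef struct { float i, q; } cplx;

    float freq_error(cplx prev, cplx curr)
    {
        float cross = prev.i * curr.q - curr.i * prev.q;  /* ~sin(phase step) */
        float dot   = prev.i * curr.i + prev.q * curr.q;  /* ~cos(phase step) */
        return (dot >= 0.0f) ? cross : -cross;            /* modulation removal */
    }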
Towards the formal specification of the requirements and design of a processor interface unit
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1993-01-01
Work to formally specify the requirements and design of a Processor Interface Unit (PIU), a single-chip subsystem providing memory interface, bus interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system, is described. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. The approaches that were developed for modeling the PIU requirements and for composition of the PIU subcomponents at high levels of abstraction are described. These approaches were used to specify and verify a nontrivial subset of the PIU behavior. The PIU specification in Higher Order Logic (HOL) is documented in a companion NASA contractor report entitled 'Towards the Formal Specification of the Requirements and Design of a Processor Interface Unit - HOL Listings.' The subsequent verification approach and HOL listings are documented in the NASA contractor reports entitled 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit' and 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit - HOL Listings.'
A high performance linear equation solver on the VPP500 parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi
1994-12-31
This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500: (1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors in the LINPACK Highly Parallel Computing benchmark.
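The blocked LU approach mentioned above factors a narrow panel, propagates it through a triangular solve, and then spends most of its time in a large matrix-matrix update of the trailing submatrix, which is the part that vectorizes and distributes well. The serial C sketch below (no pivoting, arbitrary block size) shows only that three-step structure; it is not the VPP500 implementation.

    /* Right-looking blocked LU factorization without pivoting:
     * (1) factor a panel, (2) solve for the block row of U,
     * (3) rank-kb update of the trailing submatrix (the dominant,
     * highly parallel part). Block size and lack of pivoting are
     * simplifications for brevity. */
    #define NB 64                     /* block size, arbitrary choice */

    void blocked_lu(double *a, int n) /* a is n x n, row-major, overwritten by L and U */
    {
        for (int k = 0; k < n; k += NB) {
            int kb = (k + NB < n) ? NB : n - k;

            /* 1. unblocked LU of the panel a[k:n, k:k+kb] */
            for (int j = k; j < k + kb; j++)
                for (int i = j + 1; i < n; i++) {
                    a[i * n + j] /= a[j * n + j];
                    for (int p = j + 1; p < k + kb; p++)
                        a[i * n + p] -= a[i * n + j] * a[j * n + p];
                }

            /* 2. forward solve: U12 = L11^{-1} * A12 (L11 unit lower triangular) */
            for (int j = k + kb; j < n; j++)
                for (int i = k; i < k + kb; i++)
                    for (int p = k; p < i; p++)
                        a[i * n + j] -= a[i * n + p] * a[p * n + j];

            /* 3. trailing update: A22 -= L21 * U12 */
            for (int i = k + kb; i < n; i++)
                for (int j = k + kb; j < n; j++)
                    for (int p = k; p < k + kb; p++)
                        a[i * n + j] -= a[i * n + p] * a[p * n + j];
        }
    }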
Extended performance electric propulsion power processor design study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.
1977-01-01
Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30 cm ion thruster power processor with a beam supply rating of 2.2 kW to 10 kW. Extensions in power processor performance were defined and were designed in sufficient detail to determine efficiency, component weight, part count, reliability and thermal control. Preliminary electrical design, mechanical design, and thermal analysis were performed on a 6 kW power transformer for the beam supply. Bi-Mod mechanical, structural, and thermal control configurations were evaluated for the power processor, and preliminary estimates of mechanical weight were determined. A program development plan was formulated that outlines the work breakdown structure for the development, qualification and fabrication of the power processor flight hardware.
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters to the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation. 7 references.
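A minimal sketch of the event set operations that the master and slave processors would service on behalf of the host; class and method names are illustrative and not taken from the paper:

    import heapq

    class EventSet:
        """Minimal event set: schedule and remove-minimum, the operations that
        would be offloaded from the host in the architecture described above."""
        def __init__(self):
            self._heap = []
            self._seq = 0                  # tie-breaker keeps equal-time events ordered

        def schedule(self, time, event):
            heapq.heappush(self._heap, (time, self._seq, event))
            self._seq += 1

        def next_event(self):
            time, _, event = heapq.heappop(self._heap)
            return time, event

        def empty(self):
            return not self._heap

    # Hypothetical usage inside a discrete-event simulation loop.
    es = EventSet()
    es.schedule(5.0, "arrival")
    es.schedule(2.5, "departure")
    while not es.empty():
        clock, ev = es.next_event()
        print(f"t={clock:4.1f}  process {ev}")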
Implementing direct, spatially isolated problems on transputer networks
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1988-01-01
Parametric studies were performed on transputer networks of up to 40 processors to determine how to implement, and maximize the performance of, solutions to problems where no processor-to-processor data transfer is required (spatially isolated problems). Two types of problems are investigated: a computationally intensive problem whose solution required the transmission of 160 bytes of data through the parallel network, and a communication-intensive example that required the transmission of 3 Mbytes of data through the network. This data consists of solutions being sent back to the host processor and not intermediate results for another processor to work on. Studies were performed on both integer and floating-point transputers. The latter features an on-chip floating-point math unit and offers approximately an order of magnitude performance increase over the integer transputer on real-valued computations. The results indicate that a minimum amount of work is required on each node per communication to achieve high network speedups (efficiencies). The floating-point processor requires approximately an order of magnitude more work per communication than the integer processor because of the floating-point unit's increased computing capacity.
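For reference, the speedup and efficiency figures quoted in such studies follow directly from two timings; the numbers below are hypothetical:

    def speedup_and_efficiency(t_serial, t_parallel, n_processors):
        """Network speedup and efficiency in the sense used above (illustrative)."""
        speedup = t_serial / t_parallel
        efficiency = speedup / n_processors
        return speedup, efficiency

    # Hypothetical numbers: a spatially isolated job run on 40 nodes.
    s, e = speedup_and_efficiency(t_serial=400.0, t_parallel=11.0, n_processors=40)
    print(f"speedup = {s:.1f}, efficiency = {e:.1%}")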
Development of small scale cluster computer for numerical analysis
NASA Astrophysics Data System (ADS)
Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.
2017-09-01
In this study, two units of personal computer were successfully networked together to form a small scale cluster. Each processor involved is a multicore device with four cores, giving the cluster eight processing cores in total. The cluster incorporates the Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test verified that the computers are able to pass the required information without any problem and was done using a simple MPI Hello program written in C. Additionally, a performance test was done to show that the cluster's calculation performance is much better than that of a single-CPU computer. In this performance test, four runs were made executing the same code on 1, 2, 4, and 8 processors. The results show that with additional processors the time required to solve the problem decreases; the calculation time roughly halves when the number of processors is doubled. To conclude, we successfully developed a small scale cluster computer using common hardware that provides higher computing power than a single-CPU machine, which can benefit research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
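A minimal Python analogue of the communication test described above, using the mpi4py package rather than the C program used in the study:

    # Run with, e.g.:  mpiexec -n 8 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    name = MPI.Get_processor_name()

    print(f"Hello from rank {rank} of {size} on {name}")

    # Simple communication check: sum all ranks on the root process.
    total = comm.reduce(rank, op=MPI.SUM, root=0)
    if rank == 0:
        print("sum of ranks:", total)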
Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code
NASA Astrophysics Data System (ADS)
Longoni, Gianluca; Anderson, Stanwood L.
2009-08-01
The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper will discuss the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes this paper with final remarks and future work.
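An illustrative sketch, not RAPTOR-M3G's code, of how a hybrid spatial/angular domain decomposition might assign work to processors; all sizes are invented:

    import numpy as np

    def decompose(n_cells_xyz, n_angles, n_proc_space, n_proc_angle):
        """Assign z-plane slabs and blocks of discrete-ordinate directions to a
        2-D grid of (spatial rank, angular rank) pairs."""
        nx, ny, nz = n_cells_xyz
        z_splits = np.array_split(np.arange(nz), n_proc_space)        # 1-D slabs in z
        a_splits = np.array_split(np.arange(n_angles), n_proc_angle)  # angle blocks
        layout = {}
        for ps, zs in enumerate(z_splits):
            for pa, angs in enumerate(a_splits):
                layout[(ps, pa)] = {"z_planes": zs, "angles": angs,
                                    "cells": nx * ny * len(zs)}
        return layout

    layout = decompose((120, 120, 80), n_angles=48, n_proc_space=4, n_proc_angle=2)
    for rank, part in layout.items():
        print(rank, "owns", part["cells"], "cells and", len(part["angles"]), "angles")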
NASA Astrophysics Data System (ADS)
Flynn, Edward M.; Mackowski, Michael J.
1993-01-01
This interim report documents the results of the first two phases of a four-phase program to develop a high flux heat exchanger for cooling future high performance aircraft electronics. Phase 1 defines future needs for high flux heat removal in advanced military electronics systems. The results are sorted by broad application categories: (1) commercial digital systems, (2) military data processors, (3) power processors, and (4) radar and optical systems. For applications expected to be fielded in five to ten years, the outlook is for steady state flux levels of 30-50 W/sq cm for digital processors and several hundred W/sq cm for power control applications. In Phase 1, a trade study was conducted on emerging cooling technologies which could remove a steady state chip heat flux of 100 W/sq cm while holding chip junction temperature to 90 C. Constraints imposed on heat exchanger design, in order to reflect operation in a fighter aircraft environment, included a practical lower limit on coolant supply temperature, the preference for a nontoxic, nonflammable, and nonfreezing coolant, the need to minimize weight and volume, and operation in an accelerating environment. The trade study recommended the Compact High Intensity Cooler (CHIC) for design, fabrication, and test in the final two phases of this program.
Computational needs survey of NASA automation and robotics missions. Volume 1: Survey and results
NASA Technical Reports Server (NTRS)
Davis, Gloria J.
1991-01-01
NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. A preliminary set of advanced mission computational processing requirements of automation and robotics (A&R) systems is provided for use by NASA, industry, and academic communities. These results were obtained in an assessment of the computational needs of current projects throughout NASA. A high percentage of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implementation capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, system performance levels necessary to support them, and the degree to which they are met with typical programmatic constraints. Volume one includes the survey and results. Volume two contains the appendixes.
Computational needs survey of NASA automation and robotics missions. Volume 2: Appendixes
NASA Technical Reports Server (NTRS)
Davis, Gloria J.
1991-01-01
NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is the fact that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. Here, NASA, industry and academic communities are provided with a preliminary set of advanced mission computational processing requirements of automation and robotics (A and R) systems. The results were obtained in an assessment of the computational needs of current projects throughout NASA. A high percentage of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implemented capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, system performance levels necessary to support them, and the degree to which they are met with typical programmatic constraints. Here, appendixes are provided.
Case Study of Using High Performance Commercial Processors in Space
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Olivas, Zulema
2009-01-01
The purpose of the Space Shuttle Cockpit Avionics Upgrade project (1999-2004) was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal to reduce crew workload and improve situational awareness resulted in the need for high performance Central Processing Units (CPUs). The CPU selected was from the PowerPC family, a reduced instruction set computer (RISC) architecture known for its high performance. However, the requirement for radiation tolerance resulted in the re-evaluation of the selected family member of the PowerPC line. Radiation testing revealed that the originally selected processor (PowerPC 7400) was too soft to meet mission objectives, and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation tolerant version, but had some ability to detect failures. However, its cache tags did not provide parity and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.
Case Study of Using High Performance Commercial Processors in a Space Environment
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Olivas, Zulema
2009-01-01
The purpose of the Space Shuttle Cockpit Avionics Upgrade project was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal to reduce crew workload and improve situational awareness resulted in the need for high performance Central Processing Units (CPUs). The CPU selected was from the PowerPC family, a reduced instruction set computer (RISC) architecture known for its high performance. However, the requirement for radiation tolerance resulted in the reevaluation of the selected family member of the PowerPC line. Radiation testing revealed that the originally selected processor (PowerPC 7400) was too soft to meet mission objectives, and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation tolerant version, but fared better than the 7400 in the ability to detect failures. However, its cache tags did not provide parity and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.
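A toy sketch of the dual-path strategy described above, with hypothetical command computations; the actual flight software is not public, and this only illustrates the miscompare check:

    def command_path_a(state):
        # First, independently coded path that computes a command from vehicle state.
        return round(state["target"] - state["position"], 3)

    def command_path_b(state):
        # Second path: same requirement, deliberately different arithmetic order.
        return round(-(state["position"] - state["target"]), 3)

    def issue_command(state):
        """Forward a command to the legacy avionics only when both paths agree;
        a mismatch is treated as a possible radiation-induced upset."""
        a, b = command_path_a(state), command_path_b(state)
        if a == b:
            return a
        raise RuntimeError(f"dual-path miscompare: {a} != {b}")

    print(issue_command({"position": 10.0, "target": 12.5}))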
Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors
Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee
2012-01-01
In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181
Gigaflop architecture, a hardware perspective
NASA Technical Reports Server (NTRS)
Feierbach, G. F.
1978-01-01
Any super computer built in the early 1980s will use components that are available by fall 1978. The architecture of such a system cannot depart radically from current super computers if the software experience painfully acquired from these computers in the 70's is to apply. Given the above constraints, 10 billion floating point operations per second (BFLOPS) are attainable and a problem memory of 512 million (64 bit) words could be supported by the technology of the time. In contrast to this, industry is likely to respond with commercially available machines with a performance of less than 150 MFLOPS. This is due to self-imposed constraints on the manufacturers to provide upward compatible architectures (same instruction set) and systems which can be sold in significant volumes. Since this computing speed is inadequate to meet the demands of computational fluid dynamics, a special processor is required. Issues which are felt to be significant in the pursuit of maximum compute capability in this special processor are discussed.
Formal design specification of a Processor Interface Unit
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1992-01-01
This report describes work to formally specify the requirements and design of a processor interface unit (PIU), a single-chip subsystem providing memory interface, bus interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. The need for high-quality design assurance in such applications is an undisputed fact, given the disastrous consequences that even a single design flaw can produce. Thus, the further development and application of formal methods to fault-tolerant systems is of critical importance as these systems see increasing use in modern society.
Generic Divide and Conquer Internet-Based Computing
NASA Technical Reports Server (NTRS)
Follen, Gregory J. (Technical Monitor); Radenski, Atanas
2003-01-01
The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of Peer to Peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide and conquer computations. The service engine is intended to provide free useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor. Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end Internet nodes. Our project is focused on a generic divide and conquer paradigm and on mobile applications of this paradigm that can operate on a loose and ever changing pool of lower-end Internet nodes.
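A minimal generic divide-and-conquer template in Python; in the proposed architecture the subproblems would be dispatched to volunteer contributor nodes, so the local process pool below is only a stand-in:

    from concurrent.futures import ProcessPoolExecutor

    def solve(problem, threshold, divide, combine, base_solve):
        """Generic divide-and-conquer template: split until small enough,
        solve the base cases, and combine the partial results."""
        if len(problem) <= threshold:
            return base_solve(problem)
        left, right = divide(problem)
        return combine(solve(left, threshold, divide, combine, base_solve),
                       solve(right, threshold, divide, combine, base_solve))

    if __name__ == "__main__":
        data = list(range(1_000_000))
        halve = lambda p: (p[:len(p) // 2], p[len(p) // 2:])
        # Local stand-in for remote workers: sum the two halves in separate processes.
        with ProcessPoolExecutor(max_workers=2) as pool:
            futures = [pool.submit(sum, half) for half in halve(data)]
            print("parallel sum :", sum(f.result() for f in futures))
        # Fully recursive, single-process version of the same template.
        print("recursive sum:", solve(data, threshold=1000, divide=halve,
                                      combine=lambda a, b: a + b, base_solve=sum))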
High-performance reconfigurable hardware architecture for restricted Boltzmann machines.
Ly, Daniel Le; Chow, Paul
2010-11-01
Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
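A small NumPy sketch of the RBM connection updates (one Gibbs step plus a CD-1 weight update, biases omitted for brevity); the parameters are arbitrary and this is not the FPGA implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden = 256, 256            # matches the largest configuration above
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    def gibbs_step(v):
        """One alternating Gibbs step; these node updates correspond to the
        connection updates that the hardware engines parallelize."""
        p_h = sigmoid(v @ W)                          # hidden activation probabilities
        h = (rng.random(n_hidden) < p_h).astype(float)
        p_v = sigmoid(h @ W.T)                        # reconstructed visible probabilities
        v_new = (rng.random(n_visible) < p_v).astype(float)
        return h, v_new

    v0 = (rng.random(n_visible) < 0.5).astype(float)
    h0, v1 = gibbs_step(v0)
    p_h1 = sigmoid(v1 @ W)
    # Contrastive-divergence (CD-1) weight update; the learning rate is illustrative.
    W += 0.1 * (np.outer(v0, h0) - np.outer(v1, p_h1))
    print("updated", W.size, "connections")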
Life sciences flight experiments microcomputer
NASA Technical Reports Server (NTRS)
Bartram, Peter N.
1987-01-01
A promising microcomputer configuration for the Spacelab Life Sciences Lab. Equipment inventory consists of multiple processors. One processor's use is reserved, with additional processors dedicated to real time input and output operations. A simple form of such a configuration, with a processor board for analog to digital conversion and another processor board for digital to analog conversion, was studied. The system used digital parallel data lines between the boards, operating independently of the system bus. Good performance of individual components was demonstrated: the analog to digital converter was at over 10,000 samples per second. The combination of the data transfer between boards with the input or output functions on each board slowed performance, with a maximum throughput of 2800 to 2900 analog samples per second. Any of several techniques, such as use of the system bus for data transfer or the addition of direct memory access hardware to the processor boards, should give significantly improved performance.
GPU: the biggest key processor for AI and parallel processing
NASA Astrophysics Data System (ADS)
Baji, Toru
2017-07-01
Two types of processors exist in the market. One is the conventional CPU and the other is the Graphics Processing Unit (GPU). A typical CPU is composed of 1 to 8 cores, while a GPU has thousands of cores. CPUs are good for sequential processing, while GPUs are good at accelerating software with heavy parallel execution. The GPU was initially dedicated to 3D graphics. However, from 2006, when GPUs started to adopt general-purpose cores, it was noticed that this architecture can be used as a general-purpose massively parallel processor. NVIDIA developed a software framework, the Compute Unified Device Architecture (CUDA), that makes it possible to easily program the GPU for these applications. With CUDA, GPUs started to be used widely in workstations and supercomputers. Recently, two key technologies are highlighted in the industry: artificial intelligence (AI) and autonomous driving cars. AI requires massive parallel operations to train many-layer neural networks. With a CPU alone, it was impossible to finish the training in a practical time. The latest multi-GPU system with the P100 makes it possible to finish the training in a few hours. For autonomous driving cars, TOPS-class performance is required to implement perception, localization, and path planning processing, and again an SoC with an integrated GPU will play a key role there. In this paper, the evolution of the GPU, one of the biggest commercial devices requiring state-of-the-art fabrication technology, will be introduced, together with an overview of the key GPU-demanding applications described above.
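A minimal CUDA-style example using the third-party numba package (assumed installed, with an NVIDIA GPU available); it illustrates the one-thread-per-element model discussed above rather than any specific NVIDIA product code:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vec_add(a, b, out):
        i = cuda.grid(1)                  # global thread index across all blocks
        if i < out.size:
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    d_a, d_b = cuda.to_device(a), cuda.to_device(b)
    d_out = cuda.device_array_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vec_add[blocks, threads_per_block](d_a, d_b, d_out)   # thousands of threads run in parallel

    print(np.allclose(d_out.copy_to_host(), a + b))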
WATERLOOP V2/64: A highly parallel machine for numerical computation
NASA Astrophysics Data System (ADS)
Ostlund, Neil S.
1985-07-01
Current technological trends suggest that the high performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to be anything like the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple processor architecture which attempts to solve the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, which is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.
ARACHNID: A prototype object-oriented database tool for distributed systems
NASA Technical Reports Server (NTRS)
Younger, Herbert; Oreilly, John; Frogner, Bjorn
1994-01-01
This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many giga-bytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
Neurovision processor for designing intelligent sensors
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1992-03-01
A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.
A GPU accelerated PDF transparency engine
NASA Astrophysics Data System (ADS)
Recker, John; Lin, I.-Jong; Tastl, Ingeborg
2011-01-01
As commercial printing presses become faster, cheaper and more efficient, so too must the Raster Image Processors (RIP) that prepare data for them to print. Digital press RIPs, however, have been challenged to on the one hand meet the ever increasing print performance of the latest digital presses, and on the other hand process increasingly complex documents with transparent layers and embedded ICC profiles. This paper explores the challenges encountered when implementing a GPU accelerated driver for the open source Ghostscript Adobe PostScript and PDF language interpreter targeted at accelerating PDF transparency for high speed commercial presses. It further describes our solution, including an image memory manager for tiling input and output images and documents, a PDF compatible multiple image layer blending engine, and a GPU accelerated ICC v4 compatible color transformation engine. The result, we believe, is the foundation for a scalable, efficient, distributed RIP system that can meet current and future RIP requirements for a wide range of commercial digital presses.
NASA Astrophysics Data System (ADS)
Genovese, Mariangela; Napoli, Ettore
2013-05-01
The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920 x 1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for segmentation of the background that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
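For comparison, the same OpenCV GMM background model can be exercised on a CPU in a few lines of Python; the input file name is hypothetical and this is not the hardware implementation described in the paper:

    import cv2

    cap = cv2.VideoCapture("input_1080p.mp4")          # hypothetical HD test clip
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=False)
    frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)               # per-pixel GMM update + classification
        frames += 1
    cap.release()
    print("processed", frames, "frames")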
DFT algorithms for bit-serial GaAs array processor architectures
NASA Technical Reports Server (NTRS)
Mcmillan, Gary B.
1988-01-01
Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.
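A plain Python/NumPy radix-2 FFT is shown below purely to illustrate the kind of kernel such an array processor targets; it says nothing about the bit-serial GaAs implementation itself:

    import numpy as np

    def fft_radix2(x):
        """Recursive radix-2 decimation-in-time FFT (length must be a power of two)."""
        n = len(x)
        if n == 1:
            return x
        even = fft_radix2(x[0::2])
        odd = fft_radix2(x[1::2])
        twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd
        return np.concatenate([even + twiddle, even - twiddle])

    x = np.random.rand(1024) + 1j * np.random.rand(1024)
    print(np.allclose(fft_radix2(x), np.fft.fft(x)))    # True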
A High-Throughput Processor for Flight Control Research Using Small UAVs
NASA Technical Reports Server (NTRS)
Klenke, Robert H.; Sleeman, W. C., IV; Motter, Mark A.
2006-01-01
There are numerous autopilot systems that are commercially available for small (<100 lbs) UAVs. However, they all share several key disadvantages for conducting aerodynamic research, chief amongst which is the fact that most utilize older, slower, 8- or 16-bit microcontroller technologies. This paper describes the development and testing of a flight control system (FCS) for small UAVs based on a modern, high throughput, embedded processor. In addition, this FCS platform contains user-configurable hardware resources in the form of a Field Programmable Gate Array (FPGA) that can be used to implement custom, application-specific hardware. This hardware can be used to off-load routine tasks, such as sensor data collection, from the FCS processor, thereby further increasing the computational throughput of the system.
Simulation analysis of a microcomputer-based, low-cost Omega navigation system
NASA Technical Reports Server (NTRS)
Lilley, R. W.; Salter, R. J., Jr.
1976-01-01
The current status of research on a proposed micro-computer-based, low-cost Omega Navigation System (ONS) is described. The design approach emphasizes minimum hardware, maximum software, and the use of a low-cost, commercially-available microcomputer. Currently under investigation is the implementation of a low-cost navigation processor and its interface with an omega sensor to complete the hardware-based ONS. Sensor processor functions are simulated to determine how many of the sensor processor functions can be handled by innovative software. An input data base of live Omega ground and flight test data was created. The Omega sensor and microcomputer interface modules used to collect the data are functionally described. Automatic synchronization to the Omega transmission pattern is described as an example of the algorithms developed using this data base.
High-performance computing — an overview
NASA Astrophysics Data System (ADS)
Marksteiner, Peter
1996-08-01
An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
Missile signal processing common computer architecture for rapid technology upgrade
NASA Astrophysics Data System (ADS)
Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul
2004-10-01
Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as the sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific, and required custom software development. They were developed using non-integrated toolsets and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Use of standardized development tools and 3rd party software upgrades are enabled as well as rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have a superior processing capability over a custom approach at the time of deployment as a result of shorter development cycles and use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system, and can migrate between weapon system variants enabled by modification simplicity. This paper presents a reference design using the new approach that utilizes an Altivec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS), and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and demonstration of an interceptor algorithm operating on this real-time platform are provided.
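As a small illustration of the front-end processing mentioned above, a two-point non-uniformity correction can be sketched with synthetic calibration data; none of the values relate to any fielded seeker:

    import numpy as np

    def two_point_nuc(raw, gain, offset):
        """Two-point non-uniformity correction with per-pixel gain and offset."""
        return gain * raw + offset

    # Build synthetic per-pixel calibration from flat-field frames at two levels.
    rng = np.random.default_rng(0)
    shape = (256, 256)
    true_gain = 1.0 + 0.05 * rng.standard_normal(shape)    # detector response spread
    true_offset = 10.0 * rng.standard_normal(shape)
    flat_lo, flat_hi = 100.0, 1000.0
    meas_lo = true_gain * flat_lo + true_offset
    meas_hi = true_gain * flat_hi + true_offset
    gain = (flat_hi - flat_lo) / (meas_hi - meas_lo)
    offset = flat_lo - gain * meas_lo

    scene = 500.0 * np.ones(shape)
    raw = true_gain * scene + true_offset
    corrected = two_point_nuc(raw, gain, offset)
    print("residual non-uniformity:", np.std(corrected))   # ~0 after correction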
Evaluation of commercial video-based intersection signal actuation systems.
DOT National Transportation Integrated Search
2008-12-01
Video cameras and computer image processors have come into widespread use for the detection of vehicles for signal actuation at controlled intersections. Video is considered both a cost-saving and convenient alternative to conventional stop-line ...
Finite element computation on nearest neighbor connected machines
NASA Technical Reports Server (NTRS)
Mcaulay, A. D.
1984-01-01
Research aimed at faster, more cost effective parallel machines and algorithms for improving designer productivity with finite element computations is discussed. A set of 8 boards, containing 4 nearest neighbor connected arrays of commercially available floating point chips and substantial memory, are inserted into a commercially available machine. One-tenth Mflop (64 bit operation) processors provide an 89% efficiency when solving the equations arising in a finite element problem for a single variable regular grid of size 40 by 40 by 40. This is approximately 15 to 20 times faster than a much more expensive machine such as a VAX 11/780 used in double precision. The efficiency falls off as faster or more processors are envisaged because communication times become dominant. A novel successive overrelaxation algorithm which uses cyclic reduction in order to permit data transfer and computation to overlap in time is proposed.
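A serial red-black successive overrelaxation sketch for a 2-D Poisson problem is shown below; the two-colour ordering is the uniprocessor analogue of the cyclic scheme that lets communication and computation overlap, and the grid and parameters are arbitrary:

    import numpy as np

    def sor_red_black(u, f, h, omega=1.8, sweeps=200):
        """Red-black (odd/even) SOR for -lap(u) = f on a unit square,
        with the boundary of u held fixed."""
        for _ in range(sweeps):
            for colour in (0, 1):
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        if (i + j) % 2 != colour:
                            continue
                        gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                     + h * h * f[i, j])
                        u[i, j] += omega * (gs - u[i, j])
        return u

    n = 33
    h = 1.0 / (n - 1)
    u = np.zeros((n, n))                      # boundary fixed at zero
    f = np.ones((n, n))
    u = sor_red_black(u, f, h)
    print("centre value:", u[n // 2, n // 2])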
Performance of the Cell processor for biomolecular simulations
NASA Astrophysics Data System (ADS)
De Fabritiis, G.
2007-06-01
The new Cell processor represents a turning point for computing intensive applications. Here, I show that for molecular dynamics it is possible to reach an impressive sustained performance in excess of 30 Gflops with a peak of 45 Gflops for the non-bonded force calculations, over one order of magnitude faster than a single core standard processor.
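A plain NumPy sketch of the all-pairs non-bonded (Lennard-Jones) force kernel referred to above; it is not the Cell implementation and the particle system is synthetic:

    import numpy as np

    def lj_forces(pos, epsilon=1.0, sigma=1.0):
        """All-pairs Lennard-Jones forces, the O(N^2) non-bonded kernel."""
        diff = pos[:, None, :] - pos[None, :, :]           # pairwise displacement vectors
        r2 = np.sum(diff * diff, axis=-1)
        np.fill_diagonal(r2, np.inf)                       # no self-interaction
        inv_r2 = sigma * sigma / r2
        inv_r6 = inv_r2 ** 3
        # Force on particle i is sum_j coeff_ij * (r_i - r_j).
        coeff = 24.0 * epsilon * (2.0 * inv_r6 ** 2 - inv_r6) / r2
        return np.sum(coeff[:, :, None] * diff, axis=1)

    pos = np.random.default_rng(0).random((512, 3)) * 10.0
    forces = lj_forces(pos)
    print("net force (should be ~0):", forces.sum(axis=0))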
Hybrid Electro-Optic Processor
1991-07-01
This report describes the design of a hybrid electro-optic processor to perform adaptive interference cancellation in radar systems. The processor is...modulator is reported. Included in this report is a discussion of the design, partial fabrication in the laboratory, and partial testing of the hybrid electro-optic processor. A follow-on effort is planned to complete the construction and testing of the processor. The work described in this report is the
Reducing adaptive optics latency using Xeon Phi many-core processors
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah
2015-11-01
The next generation of Extremely Large Telescopes (ELTs) for astronomy will rely heavily on the performance of their adaptive optics (AO) systems. Real-time control is at the heart of the critical technologies that will enable telescopes to deliver the best possible science and will require a very significant extrapolation from current AO hardware existing for 4-10 m telescopes. Investigating novel real-time computing architectures and testing their eligibility against anticipated challenges is one of the main priorities of technology development for the ELTs. This paper investigates the suitability of the Intel Xeon Phi, which is a commercial off-the-shelf hardware accelerator. We focus on wavefront reconstruction performance, implementing a straightforward matrix-vector multiplication (MVM) algorithm. We present benchmarking results of the Xeon Phi on a real-time Linux platform, both as a standalone processor and integrated into an existing real-time controller (RTC). Performance of single and multiple Xeon Phis are investigated. We show that this technology has the potential of greatly reducing the mean latency and variations in execution time (jitter) of large AO systems. We present both a detailed performance analysis of the Xeon Phi for a typical E-ELT first-light instrument along with a more general approach that enables us to extend to any AO system size. We show that systematic and detailed performance analysis is an essential part of testing novel real-time control hardware to guarantee optimal science results.
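A simple way to gauge MVM latency and jitter on any processor is sketched below with NumPy; the matrix dimensions are representative guesses, not the actual E-ELT instrument sizes:

    import time
    import numpy as np

    # Wavefront reconstruction reduces to y = M @ x.
    n_slopes, n_actuators = 10_000, 5_000
    M = np.random.rand(n_actuators, n_slopes).astype(np.float32)
    x = np.random.rand(n_slopes).astype(np.float32)

    latencies = []
    for _ in range(100):
        t0 = time.perf_counter()
        y = M @ x
        latencies.append(time.perf_counter() - t0)

    lat = np.array(latencies[10:]) * 1e3          # discard warm-up, convert to ms
    print(f"mean latency {lat.mean():.2f} ms, jitter (std) {lat.std():.3f} ms")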
A High Performance VLSI Computer Architecture For Computer Graphics
NASA Astrophysics Data System (ADS)
Chin, Chi-Yuan; Lin, Wen-Tai
1988-10-01
A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy modern computer graphics demands, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.
Optical Associative Processors For Visual Perception
NASA Astrophysics Data System (ADS)
Casasent, David; Telfer, Brian
1988-05-01
We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor and with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (which is achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.
Sentinel-2 Level 2A Prototype Processor: Architecture, Algorithms And First Results
NASA Astrophysics Data System (ADS)
Muller-Wilm, Uwe; Louis, Jerome; Richter, Rudolf; Gascon, Ferran; Niezette, Marc
2013-12-01
Sen2Cor is a prototype processor for Sentinel-2 Level 2A product processing and formatting. The processor is developed for and with ESA and performs the tasks of Atmospheric Correction and Scene Classification of Level 1C input data. Level 2A outputs are: Bottom-Of-Atmosphere (BOA) corrected reflectance images; Aerosol Optical Thickness, Water Vapour, and Scene Classification maps; and Quality Indicators, including cloud and snow probabilities. The Level 2A Product Formatting performed by the processor follows the specification of the Level 1C User Product.
Technology transfer of military space microprocessor developments
NASA Astrophysics Data System (ADS)
Gorden, C.; King, D.; Byington, L.; Lanza, D.
1999-01-01
Over the past 13 years the Air Force Research Laboratory (AFRL) has led the development of microprocessors and computers for USAF space and strategic missile applications. As a result of these Air Force development programs, advanced computer technology is available for use by civil and commercial space customers as well. The Generic VHSIC Spaceborne Computer (GVSC) program began in 1985 at AFRL to fulfill a deficiency in the availability of space-qualified data and control processors. GVSC developed a radiation hardened multi-chip version of the 16-bit, Mil-Std 1750A microprocessor. The follow-on to GVSC, the Advanced Spaceborne Computer Module (ASCM) program, was initiated by AFRL to establish two industrial sources for complete, radiation-hardened 16-bit and 32-bit computers and microelectronic components. Development of the Control Processor Module (CPM), the first of two ASCM contract phases, concluded in 1994 with the availability of two sources for space-qualified, 16-bit Mil-Std-1750A computers, cards, multi-chip modules, and integrated circuits. The second phase of the program, the Advanced Technology Insertion Module (ATIM), was completed in December 1997. ATIM developed two single board computers based on 32-bit reduced instruction set computer (RISC) processors. GVSC, CPM, and ATIM technologies are flying or baselined into the majority of today's DoD, NASA, and commercial satellite systems.
Effect of poor control of film processors on mammographic image quality.
Kimme-Smith, C; Sun, H; Bassett, L W; Gold, R H
1992-11-01
With the increasingly stringent standards of image quality in mammography, film processor quality control is especially important. Current methods are not sufficient for ensuring good processing. The authors used a sensitometer and densitometer system to evaluate the performance of 22 processors at 16 mammographic facilities. Standard sensitometric values of two films were established, and processor performance was assessed for variations from these standards. Developer chemistry of each processor was analyzed and correlated with its sensitometric values. Ten processors were retested, and nine were found to be out of calibration. The developer components of hydroquinone, sulfites, bromide, and alkalinity varied the most, and low concentrations of hydroquinone were associated with lower average gradients at two facilities. Use of the sensitometer and densitometer system helps identify out-of-calibration processors, but further study is needed to correlate sensitometric values with developer component values. The authors believe that present quality control would be improved if sensitometric or other tests could be used to identify developer components that are out of calibration.
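The average gradient used to judge processor performance can be computed from a sensitometric strip as sketched below; the step densities and log-exposure values are invented for illustration:

    def average_gradient(densities, log_exposures, base_plus_fog,
                         low=0.25, high=2.00):
        """Slope of the characteristic curve between two density points above
        base-plus-fog, linearly interpolated from a sensitometric strip."""
        def log_e_at(target):
            d_target = base_plus_fog + target
            for (d1, e1), (d2, e2) in zip(zip(densities, log_exposures),
                                          zip(densities[1:], log_exposures[1:])):
                if d1 <= d_target <= d2:
                    return e1 + (d_target - d1) * (e2 - e1) / (d2 - d1)
            raise ValueError("target density not spanned by the strip")
        return (high - low) / (log_e_at(high) - log_e_at(low))

    densities = [0.20, 0.22, 0.28, 0.45, 0.80, 1.30, 1.90, 2.50, 3.00, 3.30]
    log_exposures = [0.15 * i for i in range(len(densities))]
    print("average gradient:", round(average_gradient(densities, log_exposures,
                                                      base_plus_fog=0.20), 2))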
Formulating poultry processing sanitizers from alkaline salts of fatty acids
USDA-ARS?s Scientific Manuscript database
Though some poultry processing operations remove microorganisms from carcasses; other processing operations cause cross-contamination that spreads microorganisms between carcasses, processing water, and processing equipment. One method used by commercial poultry processors to reduce microbial contam...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y.; Cameron, K.W.
1998-11-24
Workload characterization has been proven an essential tool to architecture design and performance evaluation in both scientific and commercial computing areas. Traditional workload characterization techniques include FLOPS rate, cache miss ratios, CPI (cycles per instruction or IPC, instructions per cycle) etc. With the complexity of sophisticated modern superscalar microprocessors, these traditional characterization techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints especially on large-scale scientific computing applications. This paper presents a new technique of characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs virtually without overhead or slowdown. A variety of instruction counts can be utilized to calculate some average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide some insight to the problem that only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
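A sketch of how raw counter totals translate into the abstract per-instruction parameters described above; the counter names and sample values are illustrative:

    def pipeline_profile(counters, peak_issue_width=4):
        """Derive average abstract workload parameters from hardware-counter totals."""
        cpi = counters["cycles"] / counters["instructions"]
        ipc = 1.0 / cpi
        return {
            "CPI": cpi,
            "IPC": ipc,
            "fraction_of_peak": ipc / peak_issue_width,
            "flops_per_instruction": counters["fp_ops"] / counters["instructions"],
            "L1_miss_ratio": counters["l1_misses"] / counters["loads_stores"],
        }

    sample = {"cycles": 8.2e9, "instructions": 5.5e9, "fp_ops": 1.1e9,
              "l1_misses": 6.0e7, "loads_stores": 1.8e9}
    for name, value in pipeline_profile(sample).items():
        print(f"{name:22s} {value:.3f}")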
Systems and methods for performing wireless financial transactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCown, Steven Harvey
2012-07-03
A secure computing module (SCM) is configured for connection with a host device. The SCM includes a processor for performing secure processing operations, a host interface for coupling the processor to the host device, and a memory connected to the processor wherein the processor logically isolates at least some of the memory from access by the host device. The SCM also includes a proximate-field wireless communicator connected to the processor to communicate with another SCM associated with another host device. The SCM generates a secure digital signature for a financial transaction package and communicates the package and the signature to the other SCM using the proximate-field wireless communicator. Financial transactions are performed from person to person using the secure digital signature of each person's SCM and possibly message encryption. The digital signatures and transaction details are communicated to appropriate financial organizations to authenticate the transaction parties and complete the transaction.
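An illustrative sketch (not the patented implementation) of signing and verifying a transaction package, using the third-party Python 'cryptography' package:

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    sender_key = Ed25519PrivateKey.generate()    # would live in the SCM's isolated memory

    package = json.dumps({"payer": "alice", "payee": "bob",
                          "amount": "25.00", "nonce": 12345}).encode()
    signature = sender_key.sign(package)         # secure digital signature for the package

    # The receiving SCM (or a financial organization) verifies with the public key.
    try:
        sender_key.public_key().verify(signature, package)
        print("transaction authenticated")
    except InvalidSignature:
        print("signature rejected")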
Concurrent Probabilistic Simulation of High Temperature Composite Structural Response
NASA Technical Reports Server (NTRS)
Abdi, Frank
1996-01-01
A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software 'GENOA' is dedicated to parallel and high speed analysis to perform probabilistic evaluation of high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) Utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material and processing of high temperature composite affordable; (2) Computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) Implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, and increasing convergence rates through high- and low-level processor assignment; (4) Creating the framework for a Portable Parallel architecture for machine independent Multi Instruction Multi Data (MIMD), Single Instruction Multi Data (SIMD), hybrid and distributed workstation types of computers; and (5) Market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.
Solid Oxide Fuel Cells Operating on Alternative and Renewable Fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiaoxing; Quan, Wenying; Xiao, Jing
2014-09-30
This DOE project at the Pennsylvania State University (Penn State) initially involved Siemens Energy, Inc. to (1) develop new fuel processing approaches for using selected alternative and renewable fuels – anaerobic digester gas (ADG) and commercial diesel fuel (with 15 ppm sulfur) – in solid oxide fuel cell (SOFC) power generation systems; and (2) conduct integrated fuel processor – SOFC system tests to evaluate the performance of the fuel processors and overall systems. Siemens Energy Inc. was to provide an SOFC system to Penn State for testing. The Siemens work was carried out at Siemens Energy Inc. in Pittsburgh, PA. The unexpected restructuring in the Siemens organization, however, led to the elimination of the Siemens Stationary Fuel Cell Division within the company. Unfortunately, this led to the Siemens subcontract with Penn State ending on September 23rd, 2010. The SOFC system was never delivered to Penn State. With the assistance of the NETL project manager, the Penn State team has since developed collaborative research with Delphi as the new subcontractor, and this work involved the testing of a stack of planar solid oxide fuel cells from Delphi.
Stanford Hardware Development Program
NASA Technical Reports Server (NTRS)
Peterson, A.; Linscott, I.; Burr, J.
1986-01-01
Architectures for high-performance digital signal processing, particularly for high-resolution, wide-band spectrum analysis, were developed. These developments are intended to provide instrumentation for NASA's Search for Extraterrestrial Intelligence (SETI) program. The real-time signal processing work is both formal and experimental. The efficient organization and optimal scheduling of signal processing algorithms were investigated. The work is complemented by efforts in processor architecture design and implementation. A high-resolution, multichannel spectrometer that incorporates special purpose microcoded signal processors is being tested. A general purpose signal processor for the data from the multichannel spectrometer was designed to function as the processing element in a highly concurrent machine. The processor performance required for the spectrometer is in the range of 1000 to 10,000 million instructions per second (MIPS). Multiple node processor configurations, where each node performs at 100 MIPS, are sought. The nodes are microprogrammable and are interconnected through a network with high bandwidth for neighboring nodes and medium bandwidth for nodes at larger distances. The implementation of both the current multichannel spectrometer and the signal processor as Very Large Scale Integration CMOS chip sets was commenced.
Practical use of a word processor in a histopathology laboratory.
Briggs, J C; Ibrahim, N B; Mackintosh, I; Norris, D
1982-01-01
Some of the facilities available with a commercially purchased word processing program, linked to a DEC PDP 11/23 computer, are described, together with an account of its practical histopathological use. The system is based on sharing the computer with a Clinical Chemistry Department. Development was time-consuming and required the constant availability of the Department of Physics. However, once working, a considerable saving in secretarial time has resulted and a number of projects have been started which would not have been contemplated without the use of the word processor and its linked computer. PMID: 7068906
Advanced flight computers for planetary exploration
NASA Technical Reports Server (NTRS)
Stephenson, R. Rhoads
1988-01-01
Research concerning flight computers for use on interplanetary probes is reviewed. The history of these computers from the Viking mission to the present is outlined. The differences between ground commercial computers and computers for planetary exploration are listed. The development of a computer for the Mariner Mark II comet rendezvous asteroid flyby mission is described. Various aspects of recently developed computer systems are examined, including the Max real-time embedded computer, a hypercube distributed supercomputer, a SAR data processor, a processor for the High Resolution IR Imaging Spectrometer, and a robotic vision multiresolution pyramid machine for processing images obtained by a Mars Rover.
Onboard Processor for Compressing HSI Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joe; Day, John H. (Technical Monitor)
2002-01-01
With EO-1 Hyperion and MightySat in orbit, NASA and the DoD are showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor greater than 100, while retaining the necessary spectral fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our initial spectral compression experiments leverage commercial-off-the-shelf (COTS) spectral exploitation algorithms for segmentation, material identification and spectral compression that ASIT has developed. ASIT will also support the modification and integration of this COTS software into the OBP. Other commercially available COTS software for spatial compression will also be employed as part of the overall compression processing sequence. Over the next year, elements of a high-performance reconfigurable OBP will be developed to implement proven preprocessing steps that distill the HSI data stream in both spectral and spatial dimensions. The system will intelligently reduce the volume of data that must be stored, transmitted to the ground, and processed while minimizing the loss of information.
NASA Technical Reports Server (NTRS)
Perkinson, J. A.
1974-01-01
The application of associative memory processor equipment to conventional host-processor systems is discussed. Efforts were made to demonstrate how such application relieves the task burden of conventional systems and enhances system speed and efficiency. Data cover comparative theoretical performance analysis, demonstration of expanded growth capabilities, and demonstrations of actual hardware in a simulated environment.
Full-text, Downloading, & Other Issues.
ERIC Educational Resources Information Center
Tenopir, Carol
1983-01-01
Issues having a possible impact on online search services in libraries are discussed, including full-text databases, front-end processors which translate a user's input into the command language of an appropriate system, downloading to create personal files from commercial databases, and pricing. (EJS)
Hot Chips and Hot Interconnects for High End Computing Systems
NASA Technical Reports Server (NTRS)
Saini, Subhash
2005-01-01
I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in NEC SX-6/7; 8. Power 4+ processor, which is used in Hitachi SR11000; 9. NEC proprietary processor, which is used in Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).
Present and Future Applications of Digital Electronics in Nuclear Science - a Commercial Prospective
NASA Astrophysics Data System (ADS)
Tan, Hui
2011-10-01
Digital readout electronics instrumenting radiation detectors have experienced significant advancements in the last decade or so. This can be attributed, on the one hand, to the steady improvements in commercial digital processing components such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), field-programmable-gate-arrays (FPGAs), and digital-signal-processors (DSPs), and, on the other hand, to the increasing needs for improved time, position, and energy resolution in nuclear physics experiments, which have spurred the rapid development of commercial off-the-shelf high speed, high resolution digitizers or spectrometers. Absent from conventional analog electronics, the capability to record fast decaying pulses from radiation detectors in digital readout electronics has profoundly benefited nuclear physics researchers since they now can perform detailed pulse processing for applications such as gamma-ray tracking and decay-event selection and reconstruction. In this talk, present state-of-the-art digital readout electronics and its applications in a variety of nuclear science fields will be discussed, and future directions in hardware development for digital electronics will also be outlined, all from the perspective of a commercial manufacturer of digital electronics.
Applications considerations in the system design of highly concurrent multiprocessors
NASA Technical Reports Server (NTRS)
Lundstrom, Stephen F.
1987-01-01
A flow model processor approach to parallel processing is described, using very-high-performance individual processors, high-speed circuit switched interconnection networks, and a high-speed synchronization capability to minimize the effect of the inherently serial portions of applications on performance. Design studies related to the determination of the number of processors, the memory organization, and the structure of the networks used to interconnect the processor and memory resources are discussed. Simulations indicate that applications centered on the large shared data memory should be able to sustain over 500 million floating point operations per second.
Shuttle orbiter S-band payload communications equipment design evaluation
NASA Technical Reports Server (NTRS)
Springett, J. C.; Maronde, R. G.
1979-01-01
The design analysis and performance assessment of the Orbiter S-band communication equipment are reported. The equipment considered includes: network transponder, network signal processor, FM transmitter, FM signal processor, payload interrogator, and payload signal processor.
Computer-Aided Techniques for Providing Operator Performance Measures.
ERIC Educational Resources Information Center
Connelly, Edward M.; And Others
This report documents the theory, structure, and implementation of a performance processor (written in FORTRAN IV) that can accept performance demonstration data representing various levels of operator skill and, under user control, analyze the data to provide candidate performance measures and validation test results. The processor accepts two…
A customizable commercial miniaturized 320×256 indium gallium arsenide shortwave infrared camera
NASA Astrophysics Data System (ADS)
Huang, Shih-Che; O'Grady, Matthew; Groppe, Joseph V.; Ettenberg, Martin H.; Brubaker, Robert M.
2004-10-01
The design and performance of a commercial short-wave-infrared (SWIR) InGaAs microcamera engine is presented. The 0.9-to-1.7 micron SWIR imaging system consists of a room-temperature-TEC-stabilized, 320×256 (25 μm pitch) InGaAs focal plane array (FPA) and a high-performance, highly customizable image-processing set of electronics. The detectivity, D*, of the system is greater than 10¹³ cm·√Hz/W at 1.55 μm, and this sensitivity may be adjusted in real-time over 100 dB. It features snapshot-mode integration with a minimum exposure time of 130 μs. The digital video processor provides real-time pixel-to-pixel, 2-point dark-current subtraction and non-uniformity compensation along with defective-pixel substitution. Other features include automatic gain control (AGC), gamma correction, 7 preset configurations, adjustable exposure time, external triggering, and windowing. The windowing feature is highly flexible; the region of interest (ROI) may be placed anywhere on the imager and can be varied at will. Windowing allows for high-speed readout enabling such applications as target acquisition and tracking; for example, a 32×32 ROI window may be read out at over 3500 frames per second (fps). Output video is provided as EIA170-compatible analog, or as 12-bit CameraLink-compatible digital. All the above features are accomplished in a small volume < 28 cm³, weight < 70 g, and with low power consumption < 1.3 W at room temperature using this new microcamera engine. Video processing is based on a field-programmable gate array (FPGA) platform with a soft-embedded processor that allows for ease of integration/addition of customer-specific algorithms, processes, or design requirements. The camera was developed with the high-performance, space-restricted, power-conscious application in mind, such as robotic or UAV deployment.
High Performance, Dependable Multiprocessor
NASA Technical Reports Server (NTRS)
Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric;
2006-01-01
With the ever increasing demand for higher bandwidth and processing capacity of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power efficient, high performance, highly dependable, fault tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.
Active non-volatile memory post-processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kannan, Sudarsun; Milojicic, Dejan S.; Talwar, Vanish
A computing node includes an active Non-Volatile Random Access Memory (NVRAM) component which includes memory and a sub-processor component. The memory is to store data chunks received from a processor core, the data chunks comprising metadata indicating a type of post-processing to be performed on data within the data chunks. The sub-processor component is to perform post-processing of said data chunks based on said metadata.
Green Secure Processors: Towards Power-Efficient Secure Processor Design
NASA Astrophysics Data System (ADS)
Chhabra, Siddhartha; Solihin, Yan
With the increasing wealth of digital information stored on computer systems today, security issues have become increasingly important. In addition to attacks targeting the software stack of a system, hardware attacks have become equally likely. Researchers have proposed Secure Processor Architectures which utilize hardware mechanisms for memory encryption and integrity verification to protect the confidentiality and integrity of data and computation, even from sophisticated hardware attacks. While there have been many works addressing performance and other system level issues in secure processor design, power issues have largely been ignored. In this paper, we first analyze the sources of power (energy) increase in different secure processor architectures. We then present a power analysis of various secure processor architectures in terms of their increase in power consumption over a base system with no protection and then provide recommendations for designs that offer the best balance between performance and power without compromising security. We extend our study to the embedded domain as well. We also outline the design of a novel hybrid cryptographic engine that can be used to minimize the power consumption for a secure processor. We believe that if secure processors are to be adopted in future systems (general purpose or embedded), it is critically important that power issues are considered in addition to performance and other system level issues. To the best of our knowledge, this is the first work to examine the power implications of providing hardware mechanisms for security.
System for clinical photometric stereo endoscopy
NASA Astrophysics Data System (ADS)
Durr, Nicholas J.; González, Germán.; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente
2014-02-01
Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
The ATLAS Level-1 Calorimeter Trigger: PreProcessor implementation and performance
NASA Astrophysics Data System (ADS)
Åsman, B.; Achenbach, R.; Allbrooke, B. M. M.; Anders, G.; Andrei, V.; Büscher, V.; Bansil, H. S.; Barnett, B. M.; Bauss, B.; Bendtz, K.; Bohm, C.; Bracinik, J.; Brawn, I. P.; Brock, R.; Buttinger, W.; Caputo, R.; Caughron, S.; Cerrito, L.; Charlton, D. G.; Childers, J. T.; Curtis, C. J.; Daniells, A. C.; Davis, A. O.; Davygora, Y.; Dorn, M.; Eckweiler, S.; Edmunds, D.; Edwards, J. P.; Eisenhandler, E.; Ellis, K.; Ermoline, Y.; Föhlisch, F.; Faulkner, P. J. W.; Fedorko, W.; Fleckner, J.; French, S. T.; Gee, C. N. P.; Gillman, A. R.; Goeringer, C.; Hülsing, T.; Hadley, D. R.; Hanke, P.; Hauser, R.; Heim, S.; Hellman, S.; Hickling, R. S.; Hidvégi, A.; Hillier, S. J.; Hofmann, J. I.; Hristova, I.; Ji, W.; Johansen, M.; Keller, M.; Khomich, A.; Kluge, E.-E.; Koll, J.; Laier, H.; Landon, M. P. J.; Lang, V. S.; Laurens, P.; Lepold, F.; Lilley, J. N.; Linnemann, J. T.; Müller, F.; Müller, T.; Mahboubi, K.; Martin, T. A.; Mass, A.; Meier, K.; Meyer, C.; Middleton, R. P.; Moa, T.; Moritz, S.; Morris, J. D.; Mudd, R. D.; Narayan, R.; zur Nedden, M.; Neusiedl, A.; Newman, P. R.; Nikiforov, A.; Ohm, C. C.; Perera, V. J. O.; Pfeiffer, U.; Plucinski, P.; Poddar, S.; Prieur, D. P. F.; Qian, W.; Rieck, P.; Rizvi, E.; Sankey, D. P. C.; Schäfer, U.; Scharf, V.; Schmitt, K.; Schröder, C.; Schultz-Coulon, H.-C.; Schumacher, C.; Schwienhorst, R.; Silverstein, S. B.; Simioni, E.; Snidero, G.; Staley, R. J.; Stamen, R.; Stock, P.; Stockton, M. C.; Tan, C. L. A.; Tapprogge, S.; Thomas, J. P.; Thompson, P. D.; Thomson, M.; True, P.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Weber, P.; Wessels, M.; Wiglesworth, C.; Williams, S. L.
2012-12-01
The PreProcessor system of the ATLAS Level-1 Calorimeter Trigger (L1Calo) receives about 7200 analogue signals from the electromagnetic and hadronic components of the calorimetric detector system. Lateral division results in cells which are pre-summed to so-called Trigger Towers of size 0.1 × 0.1 along azimuth (φ) and pseudorapidity (η). The received calorimeter signals represent deposits of transverse energy. The system consists of 124 individual PreProcessor modules that digitise the input signals for each LHC collision, and provide energy and timing information to the digital processors of the L1Calo system, which identify physics objects forming much of the basis for the full ATLAS first level trigger decision. This paper describes the architecture of the PreProcessor, its hardware realisation, functionality, and performance.
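As a simple illustration of the pre-summing step described above, the sketch below bins toy calorimeter cell energies into roughly 0.1 x 0.1 towers in (eta, phi) and sums the transverse energy per tower. The granularity, ranges, and randomly generated cell list are illustrative only and do not reproduce the real detector geometry.

```python
import numpy as np

def presum_towers(eta, phi, et, eta_range=2.5, n_phi=64):
    """Sum cell transverse energies into ~0.1 x 0.1 trigger towers (sketch).
    Granularity and cell lists are illustrative, not the real detector geometry."""
    d_eta = 0.1
    d_phi = 2 * np.pi / n_phi             # ~0.098 rad, i.e. roughly 0.1 in phi
    n_eta = int(2 * eta_range / d_eta)    # 50 bins over |eta| < 2.5
    eta_bin = np.clip(((eta + eta_range) / d_eta).astype(int), 0, n_eta - 1)
    phi_bin = ((phi % (2 * np.pi)) / d_phi).astype(int) % n_phi
    towers = np.zeros((n_eta, n_phi))
    np.add.at(towers, (eta_bin, phi_bin), et)   # accumulate ET per tower
    return towers

rng = np.random.default_rng(0)
cells = 10000
eta = rng.uniform(-2.5, 2.5, cells)
phi = rng.uniform(0, 2 * np.pi, cells)
et = rng.exponential(0.5, cells)               # toy transverse energies in GeV
towers = presum_towers(eta, phi, et)
print(towers.shape, towers.max().round(2))
```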
Performance Modeling of the ADA Rendezvous
1991-10-01
…queueing network of Figure 2, SERVERTASK can complete only one rendezvous at a time. Thus, the rate at which the rendezvous requests are processed at the… In Network 1, SERVERTASK competes with the traffic tasks of the Server Processor. Each time SERVERTASK gains access to the processor, SERVERTASK completes… [Figure 10: a conceptualization of the algorithm (Client Processor, Server Processor, Software Server, Network 2).] The SERVERTASK software server of Network 2…
Self-checking self-repairing computer nodes using the mirror processor
NASA Technical Reports Server (NTRS)
Tamir, Yuval
1992-01-01
Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle when the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities, are described.
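As a rough illustration of the micro-rollback idea described above, the sketch below runs two identical toy cores in lockstep, compares their visible state every cycle, and rolls both back to the start of the cycle when a mismatch (an injected transient fault) is detected. The ToyCore model, instruction format, and fault injection are hypothetical and are not taken from the MP design.

```python
import copy
import random

class ToyCore:
    """Minimal register-file model used only to illustrate lockstep comparison."""
    def __init__(self):
        self.regs = [0] * 8
        self.pc = 0

    def step(self, program, fault=False):
        """Execute one instruction; 'fault' injects a transient single-bit flip."""
        op, dst, src, imm = program[self.pc % len(program)]
        if op == "addi":
            self.regs[dst] = self.regs[src] + imm
        if fault:
            self.regs[dst] ^= 1 << random.randrange(8)    # simulated SEU
        self.pc += 1
        return (self.pc, tuple(self.regs))                # the compared "external signals"

def run_lockstep(program, cycles, fault_cycle=5):
    a, b = ToyCore(), ToyCore()
    for cycle in range(cycles):
        saved = (copy.deepcopy(a), copy.deepcopy(b))      # state at start of cycle
        out_a = a.step(program, fault=(cycle == fault_cycle))
        out_b = b.step(program)
        if out_a != out_b:             # comparator sees a mismatch this cycle
            a, b = saved               # micro rollback: undo this cycle's changes
            a.step(program)            # retry; the transient fault does not recur
            b.step(program)
    assert a.regs == b.regs
    return a.regs

print(run_lockstep([("addi", 1, 1, 3), ("addi", 2, 1, 1)], cycles=10))
```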
Soft-core processor study for node-based architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James
2008-09-01
Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power and positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary with varying degrees of mission-specific performance requirements on these nodes. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hardcore processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA based processors for use in future NBA systems--two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty; cache error mitigation is necessary when operating in a radiation environment.
Cheung, Kit; Schultz, Simon R.; Luk, Wayne
2016-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542
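The abstract notes that NeuroFlow is configured through PyNN, a simulator-independent network description language. The sketch below shows what such a description looks like using the public PyNN API; since the NeuroFlow backend module name is not given here, the standard NEST backend is imported purely as a stand-in, and all population sizes and parameters are illustrative.

```python
import pyNN.nest as sim   # stand-in backend; PyNN portability means only this import changes

sim.setup(timestep=1.0)   # ms

# Two populations: Izhikevich excitatory cells, integrate-and-fire inhibitory cells
exc = sim.Population(400, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0))
inh = sim.Population(100, sim.IF_cond_exp())

# Plastic excitatory-to-inhibitory projection using a spike-pair STDP rule
stdp = sim.STDPMechanism(
    timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                        A_plus=0.01, A_minus=0.012),
    weight_dependence=sim.AdditiveWeightDependence(w_min=0.0, w_max=0.05),
    weight=0.02, delay=1.0)
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1), synapse_type=stdp)

exc.record("spikes")
sim.run(1000.0)            # ms of biological time
spikes = exc.get_data()    # retrieve recorded spike trains as a Neo Block
sim.end()
```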
[Improving speech comprehension using a new cochlear implant speech processor].
Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A
2009-06-01
The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improvement in signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg sentences in the clinical setting S(0)N(CI), with the speech signal at 0 degrees and noise lateral to the CI at 90 degrees. With the convincing findings from our evaluations of this multicenter study cohort, a trial with the Freedom 24 sound processor for all suitable CI users is recommended. For evaluating the benefits of a new processor, the comparative assessment paradigm used in our study design would be considered ideal for use with individual patients.
Accelerating molecular dynamic simulation on the cell processor and Playstation 3.
Luttmann, Edgar; Ensign, Daniel L; Vaidyanathan, Vishal; Houston, Mike; Rimon, Noam; Øland, Jeppe; Jayachandran, Guha; Friedrichs, Mark; Pande, Vijay S
2009-01-30
Implementation of molecular dynamics (MD) calculations on novel architectures will vastly increase its power to calculate the physical properties of complex systems. Herein, we detail algorithmic advances developed to accelerate MD simulations on the Cell processor, a commodity processor found in PlayStation 3 (PS3). In particular, we discuss issues regarding memory access versus computation and the types of calculations which are best suited for streaming processors such as the Cell, focusing on implicit solvation models. We conclude with a comparison of improved performance on the PS3's Cell processor over more traditional processors. (c) 2008 Wiley Periodicals, Inc.
Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL
2009-07-21
In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.
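A minimal sketch of the allocation idea described above: processors are placed in a linear order consistent with a simple dimension-ordered traversal of a 3-D grid, the order is treated as a loop, and requests are packed onto contiguous free runs first-fit. The ordering and packing rules here are illustrative stand-ins, not the patented MC allocation algorithm.

```python
def linear_order(dims):
    """Hypothetical routing-consistent linear ordering of a 3-D grid of processors:
    simply x-major, then y, then z, matching dimension-ordered routing."""
    nx, ny, nz = dims
    return [(x, y, z) for z in range(nz) for y in range(ny) for x in range(nx)]

def first_fit_loop(order, free, request):
    """First-fit allocation of 'request' processors on the loop formed by the
    linear ordering (a sketch of the bin-packing step, not the patented algorithm)."""
    n = len(order)
    for start in range(n):
        window = [order[(start + i) % n] for i in range(request)]  # wrap around
        if all(free[p] for p in window):
            for p in window:
                free[p] = False            # mark the run as allocated
            return window
    return None                            # no contiguous run of free processors

order = linear_order((4, 4, 2))
free = {p: True for p in order}
job = first_fit_loop(order, free, request=6)
print(job)
```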
NASA Technical Reports Server (NTRS)
1990-01-01
The Multi-Compatible Network Interface Unit (MCNIU) is intended to connect the space station's communications and tracking, guidance and navigation, life support, electric power, payload data, hand controls, display consoles and other systems, and also communicate with diverse processors. Honeywell is now marketing MCNIU commercially. It has applicability in certain military operations or civil control centers. It has nongovernment utility among large companies, universities and research organizations that transfer large amounts of data among workstations and computers. *This product is no longer commercially available.
FPGA-Based, Self-Checking, Fault-Tolerant Computers
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2004-01-01
A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
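To make the recovery-cache idea concrete, the toy sketch below keeps an undo log of pre-write values between checkpoints so that memory state can be rolled back when the self-checking processor reports an error. The class and method names are invented for illustration and are not part of the NASA design.

```python
class RecoveryCache:
    """Toy checkpoint buffer: logs the old value of every word written since the
    last checkpoint so memory can be rolled back after a detected error
    (illustrative sketch only; names and structure are assumptions)."""
    def __init__(self, size):
        self.memory = [0] * size
        self.undo_log = {}                           # address -> value at last checkpoint

    def write(self, addr, value):
        self.undo_log.setdefault(addr, self.memory[addr])   # save old value once
        self.memory[addr] = value

    def commit_checkpoint(self):
        self.undo_log.clear()                        # computation verified; discard undo data

    def rollback(self):
        for addr, old in self.undo_log.items():
            self.memory[addr] = old                  # restore the last checkpointed state
        self.undo_log.clear()

mem = RecoveryCache(16)
mem.write(3, 42); mem.commit_checkpoint()            # known-good state
mem.write(3, 99); mem.write(4, 7)                    # speculative writes after the checkpoint
mem.rollback()                                       # error detected: undo them
print(mem.memory[3], mem.memory[4])                  # 42 0
```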
A hierarchical, automated target recognition algorithm for a parallel analog processor
NASA Technical Reports Server (NTRS)
Woodward, Gail; Padgett, Curtis
1997-01-01
A hierarchical approach is described for an automated target recognition (ATR) system, VIGILANTE, that uses a massively parallel, analog processor (3DANN). The 3DANN processor is capable of performing 64 concurrent inner products of size 1x4096 every 250 nanoseconds.
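A quick back-of-envelope check of the throughput implied by those figures, under the assumption that each 1x4096 inner product corresponds to 4096 multiply-accumulate operations:

```python
# Implied throughput of the 3DANN figures quoted above, assuming each 1x4096
# inner product corresponds to 4096 multiply-accumulate (MAC) operations.
inner_products = 64
length = 4096
period_s = 250e-9

macs_per_second = inner_products * length / period_s
print(f"{macs_per_second:.2e} MAC/s")   # ~1.0e+12 MAC/s, i.e. roughly 2e12 arithmetic ops/s
```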
Fault tolerant, radiation hard, high performance digital signal processor
NASA Technical Reports Server (NTRS)
Holmann, Edgar; Linscott, Ivan R.; Maurer, Michael J.; Tyler, G. L.; Libby, Vibeke
1990-01-01
An architecture has been developed for a high-performance VLSI digital signal processor that is highly reliable, fault-tolerant, and radiation-hard. The signal processor, part of a spacecraft receiver designed to support uplink radio science experiments at the outer planets, organizes the connections between redundant arithmetic resources, register files, and memory through a shuffle exchange communication network. The configuration of the network and the state of the processor resources are all under microprogram control, which both maps the resources according to algorithmic needs and reconfigures the processing should a failure occur. In addition, the microprogram is reloadable through the uplink to accommodate changes in the science objectives throughout the course of the mission. The processor will be implemented with silicon compiler tools, and its design will be verified through silicon compilation simulation at all levels from the resources to full functionality. By blending reconfiguration with redundancy the processor implementation is fault-tolerant and reliable, and possesses the long expected lifetime needed for a spacecraft mission to the outer planets.
40 CFR 725.1 - Scope and purpose.
Code of Federal Regulations, 2010 CFR
2010-07-01
... research and development for commercial purposes. New microorganisms for which manufacturers and importers... any microorganism that EPA determines by rule is being manufactured, imported, or processed for a significant new use. (b) Any manufacturer, importer, or processor required to report under section 5 of TSCA...
Database for LDV Signal Processor Performance Analysis
NASA Technical Reports Server (NTRS)
Baker, Glenn D.; Murphy, R. Jay; Meyers, James F.
1989-01-01
A comparative and quantitative analysis of various laser velocimeter signal processors is difficult because standards for characterizing signal bursts have not been established. This leaves the researcher to select a signal processor based only on manufacturers' claims without the benefit of direct comparison. The present paper proposes the use of a database of digitized signal bursts obtained from a laser velocimeter under various configurations as a method for directly comparing signal processors.
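To illustrate what such a database entry might contain, the sketch below generates a synthetic Gaussian-envelope Doppler burst (pedestal plus fringe signal plus noise) and applies a plain FFT peak search as a stand-in for a signal processor under test. All parameters are illustrative and are not taken from the paper.

```python
import numpy as np

FS = 50e6   # sample rate, Hz (illustrative)

def synthetic_burst(f_doppler=5e6, n=512, snr_db=10.0, seed=0):
    """Illustrative Gaussian-envelope Doppler burst (pedestal + fringe + noise),
    the kind of digitized record a signal-processor test database could hold."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / FS
    t0, tau = t[n // 2], n / (8 * FS)                 # burst centre and width
    envelope = np.exp(-((t - t0) / tau) ** 2)
    signal = envelope * (1.0 + np.cos(2 * np.pi * f_doppler * t))
    noise = 10 ** (-snr_db / 20) * rng.standard_normal(n)
    return signal + noise

x = synthetic_burst()
# Stand-in "processor under test": ignore the low-frequency pedestal region and
# take the FFT magnitude peak as the Doppler frequency estimate.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / FS)
f_est = freqs[freqs > 1e6][spec[freqs > 1e6].argmax()]
print(f"estimated Doppler frequency: {f_est / 1e6:.2f} MHz")
```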
Multitask neurovision processor with extensive feedback and feedforward connections
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1991-11-01
A multi-task neuro-vision processor which performs a variety of information processing operations associated with the early stages of biological vision is presented. The network architecture of this neuro-vision processor, called the positive-negative (PN) neural processor, is loosely based on the neural activity fields exhibited by thalamic and cortical nervous tissue layers. The computational operation performed by the processor arises from the strength of the recurrent feedback among the numerous positive and negative neural computing units. By adjusting the feedback connections it is possible to generate diverse dynamic behavior that may be used for short-term visual memory (STVM), spatio-temporal filtering (STF), and pulse frequency modulation (PFM). The information attributes that are to be processed may be regulated by modifying the feedforward connections from the signal space to the neural processor.
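A loose sketch of the kind of coupled positive/negative activity-field update the abstract describes: each field decays toward a rectified sum of recurrent feedback, lateral spread, and input, and the feedback gains determine how long the response outlasts the stimulus. The gains, kernel, and sizes below are invented for illustration and are not the paper's model.

```python
import numpy as np

def pn_step(p, q, stimulus, w_pp=0.9, w_pn=0.5, w_qp=0.5, w_qq=0.1, dt=0.1):
    """One update of coupled positive (p) and negative (q) activity fields with
    recurrent feedback and local lateral spread; all gains are illustrative."""
    lateral = lambda a: np.convolve(a, [0.25, 0.5, 0.25], mode="same")
    dp = -p + np.maximum(0.0, w_pp * lateral(p) - w_pn * q + stimulus)
    dq = -q + np.maximum(0.0, w_qp * lateral(p) - w_qq * q)
    return p + dt * dp, q + dt * dq

p = np.zeros(64)
q = np.zeros(64)
stim = np.zeros(64)
stim[30:34] = 1.0

for step in range(200):
    # stimulus removed halfway through; how long the positive field keeps a trace
    # of it depends on the recurrent gains (longer trace ~ short-term memory,
    # faster decay ~ spatio-temporal filtering)
    p, q = pn_step(p, q, stim if step < 100 else np.zeros(64))

print(p[28:36].round(3))
```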
Optical systolic array processor using residue arithmetic
NASA Technical Reports Server (NTRS)
Jackson, J.; Casasent, D.
1983-01-01
The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.
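As a concrete, purely software illustration of carrying out a matrix-vector product entirely in residue notation, the sketch below computes the product independently in each small modulus channel and reconstructs the result with the Chinese Remainder Theorem. The moduli and data are arbitrary and chosen only so that the dynamic range covers the true result.

```python
import numpy as np
from math import prod

MODULI = (7, 11, 13, 15)   # pairwise coprime; dynamic range = 7*11*13*15 = 15015

def crt(residues):
    """Chinese Remainder Theorem reconstruction from the residue digits."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # pow(..., -1, m) is the modular inverse
    return x % M

A = np.array([[3, 1, 4], [1, 5, 9], [2, 6, 5]])
v = np.array([7, 8, 9])

# Matrix-vector product computed independently in each small residue channel,
# as one optical/systolic channel per modulus would do.
residue_results = [(A @ v) % m for m in MODULI]
y = np.array([crt([int(chan[i]) for chan in residue_results]) for i in range(3)])

assert (y == A @ v).all()
print(y)                                # [ 65 128 107]
```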
Duke, Mila Morais; Wolfe, Jace; Schafer, Erin
2016-05-01
Cochlear implant (CI) recipients often experience difficulty understanding speech in noise and speech that originates from a distance. Many CI recipients also experience difficulty understanding speech originating from a television. Use of hearing assistance technology (HAT) may improve speech recognition in noise and for signals that originate from more than a few feet from the listener; however, there are no published studies evaluating the potential benefits of a wireless HAT designed to deliver audio signals from a television directly to a CI sound processor. The objective of this study was to compare speech recognition in quiet and in noise of CI recipients with the use of their CI alone and with the use of their CI and a wireless HAT (Cochlear Wireless TV Streamer). A two-way repeated measures design was used to evaluate performance differences obtained in quiet and in competing noise (65 dBA) with the CI sound processor alone and with the sound processor coupled to the Cochlear Wireless TV Streamer. Sixteen users of Cochlear Nucleus 24 Freedom, CI512, and CI422 implants were included in the study. Participants were evaluated in four conditions including use of the sound processor alone and use of the sound processor with the wireless streamer in quiet and in the presence of competing noise at 65 dBA. Speech recognition was evaluated in each condition with two full lists of Computer-Assisted Speech Perception Testing and Training Sentence-Level Test sentences presented from a light-emitting diode television. Speech recognition in noise was significantly better with use of the wireless streamer compared to participants' performance with their CI sound processor alone. There was also a nonsignificant trend toward better performance in quiet with use of the TV Streamer. Performance was significantly poorer when evaluated in noise compared to performance in quiet when the TV Streamer was not used. Use of the Cochlear Wireless TV Streamer designed to stream audio from a television directly to a CI sound processor provides better speech recognition in quiet and in noise when compared to performance obtained with use of the CI sound processor alone. American Academy of Audiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yao; Balaprakash, Prasanna; Meng, Jiayuan
We present Raexplore, a performance modeling framework for architecture exploration. Raexplore enables rapid, automated, and systematic search of architecture design space by combining hardware counter-based performance characterization and analytical performance modeling. We demonstrate Raexplore for two recent manycore processors, the IBM BlueGene/Q compute chip and the Intel Xeon Phi, targeting a set of scientific applications. Our framework is able to capture complex interactions between architectural components including instruction pipeline, cache, and memory, and to achieve a 3–22% error for same-architecture and cross-architecture performance predictions. Furthermore, we apply our framework to assess the two processors, and discover and evaluate a list of architectural scaling options for future processor designs.
75 FR 52507 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-26
... standards designed to ensure that all catch delivered to the processor is accurately weighed and accounted... NMFS for catcher/processors and motherships is based on the vessel meeting a series of design criteria. Because of the wide variations in factory layout for inshore processors, NMFS requires a performance-based...
Detailed specifications are given for a network of data processors and submodels that can generate the parameter fields required by the regional oxidant model formulated in Part 1 of this report. Operations performed by the processor network include simulation of the motion and d...
NASA Technical Reports Server (NTRS)
Lund, D.
1998-01-01
This report presents a description of the tests performed, and the test data, for the A1 METSAT Signal Processor Assembly P/N 1331670-2, S/N F05. The assembly was tested in accordance with AE-26754, "METSAT Signal Processor Scan Drive and Integration Procedure." The objective is to demonstrate functionality of the signal processor prior to instrument integration.
NASA Technical Reports Server (NTRS)
Lund, D.
1998-01-01
This report presents a description of tests performed, and the test data, for the A1 METSAT Signal Processor Assembly PN: 1331679-2, S/N F03. This assembly was tested in accordance with AE-26754, "METSAT Signal Processor Scan Drive Test and Integration Procedure." The objective is to demonstrate functionality of the signal processor prior to instrument integration.
NASA Astrophysics Data System (ADS)
Esepkina, N. A.; Lavrov, A. P.; Anan'ev, M. N.; Blagodarnyi, V. S.; Ivanov, S. I.; Mansyrev, M. I.; Molodyakov, S. A.
1995-10-01
Two new types of optoelectronic radio-signal processors were investigated. Charge-coupled device (CCD) photodetectors are used in these processors under continuous scanning conditions, i.e. in a time delay and storage mode. One of these processors is based on a CCD photodetector array with a reference-signal amplitude transparency and the other is an adaptive acousto-optical signal processor with linear frequency modulation. The processor with the transparency performs multichannel discrete-analogue convolution of an input signal with a corresponding kernel of the transformation determined by the transparency. If a light source is an array of light-emitting diodes of special (stripe) geometry, the optical stages of the processor can be made from optical fibre components and the whole processor then becomes a rigid 'sandwich' (a compact hybrid optoelectronic microcircuit). A report is also given of a study of a prototype processor with optical fibre components for the reception of signals from a system with antenna aperture synthesis, which forms a radio image of the Earth.
Method for simultaneous overlapped communications between neighboring processors in a multiple
Benner, Robert E.; Gustafson, John L.; Montry, Gary R.
1991-01-01
A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.
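The communication pattern described (independent channels to neighboring nodes used concurrently with computation) can be sketched in software with non-blocking MPI calls: post sends and receives to both neighbors, compute on the interior while messages are in flight, then wait. The sketch below uses mpi4py purely to illustrate the overlap idea and is not the patented method; it would be run with something like mpirun -n 4 python overlap_sketch.py (filename hypothetical).

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size    # ring of processes

local = np.full(1000, float(rank))
halo_left = np.empty(1, dtype=np.float64)
halo_right = np.empty(1, dtype=np.float64)

# Post communication with both neighbours first ...
reqs = [comm.Isend(local[:1], dest=left),
        comm.Isend(local[-1:], dest=right),
        comm.Irecv(halo_left, source=left),
        comm.Irecv(halo_right, source=right)]

# ... then overlap useful computation on the interior while messages are in flight
interior_sum = local[1:-1].sum()

MPI.Request.Waitall(reqs)        # communication complete; halo values now usable
total = interior_sum + local[0] + local[-1] + halo_left[0] + halo_right[0]
print(rank, total)
```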
Frequency Dependence of Single-event Upset in Advanced Commercial PowerPC Microprocessors
NASA Technical Reports Server (NTRS)
Irom, Frokh; Farmanesh, Farhad F.; Swift, Gary M.; Johnston, Allen H.
2004-01-01
This paper examines single-event upsets in advanced commercial SOI microprocessors in a dynamic mode, studying SEU sensitivity of General Purpose Registers (GPRs) with clock frequency. Results are presented for SOI processors with feature sizes of 0.18 microns and two different core voltages. Single-event upset from heavy ions is measured for advanced commercial microprocessors in a dynamic mode with clock frequency up to 1GHz. Frequency and core voltage dependence of single-event upsets in registers is discussed.
Unaligned instruction relocation
Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.
2018-01-23
In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.
Proton exchange membrane fuel cell technology for transportation applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swathirajan, S.
1996-04-01
Proton Exchange Membrane (PEM) fuel cells are extremely promising as future power plants in the transportation sector to achieve an increase in energy efficiency and eliminate environmental pollution due to vehicles. GM is currently involved in a multiphase program with the US Department of Energy for developing a proof-of-concept hybrid vehicle based on a PEM fuel cell power plant and a methanol fuel processor. Other participants in the program are Los Alamos National Labs, Dow Chemical Co., Ballard Power Systems, and DuPont Co. In the just-completed Phase 1 of the program, a 10 kW PEM fuel cell power plant was built and tested to demonstrate the feasibility of integrating a methanol fuel processor with a PEM fuel cell stack. However, the fuel cell power plant must overcome stiff technical and economic challenges before it can be commercialized for light duty vehicle applications. Progress achieved in Phase 1 on the use of monolithic catalyst reactors in the fuel processor, managing CO impurity in the fuel cell stack, low-cost electrode-membrane assemblies, and the integration of the fuel processor with a Ballard PEM fuel cell stack will be presented.
Parallel algorithms for quantum chemistry. I. Integral transformations on a hypercube multiprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteside, R.A.; Binkley, J.S.; Colvin, M.E.
1987-02-15
For many years it has been recognized that fundamental physical constraints such as the speed of light will limit the ultimate speed of single processor computers to less than about three billion floating point operations per second (3 GFLOPS). This limitation is becoming increasingly restrictive as commercially available machines are now within an order of magnitude of this asymptotic limit. A natural way to avoid this limit is to harness together many processors to work on a single computational problem. In principle, these parallel processing computers have speeds limited only by the number of processors one chooses to acquire. The usefulness of potentially unlimited processing speed to a computationally intensive field such as quantum chemistry is obvious. If these methods are to be applied to significantly larger chemical systems, parallel schemes will have to be employed. For this reason we have developed distributed-memory algorithms for a number of standard quantum chemical methods. We are currently implementing these on a 32 processor Intel hypercube. In this paper we present our algorithm and benchmark results for one of the bottleneck steps in quantum chemical calculations: the four index integral transformation.
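The four-index transformation named as the benchmark kernel takes AO-basis two-electron integrals (pq|rs) to the MO basis through four successive quarter-transformations, an O(N^5) scheme rather than a naive O(N^8) contraction. The sketch below shows the serial kernel on random data; the hypercube distribution of the work is not reproduced here, and the basis size and data are illustrative.

```python
import numpy as np

n = 8                                    # basis size (illustrative)
rng = np.random.default_rng(1)
ao = rng.random((n, n, n, n))            # AO-basis integrals (pq|rs), dense toy data
C = rng.random((n, n))                   # MO coefficient matrix

# Four successive quarter-transformations: O(N^5) work instead of the naive
# O(N^8) single contraction. Each step is the kind of unit that a distributed
# algorithm would spread across hypercube processors.
t1 = np.einsum("pi,pqrs->iqrs", C, ao)
t2 = np.einsum("qj,iqrs->ijrs", C, t1)
t3 = np.einsum("rk,ijrs->ijks", C, t2)
mo = np.einsum("sl,ijks->ijkl", C, t3)

# Check against the direct (far more expensive) one-shot transformation
direct = np.einsum("pi,qj,rk,sl,pqrs->ijkl", C, C, C, C, ao)
assert np.allclose(mo, direct)
print(mo.shape)
```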
NASA Technical Reports Server (NTRS)
Johnson, M.; Label, K.; McCabe, J.; Powell, W.; Bolotin, G.; Kolawa, E.; Ng, T.; Hyde, D.
2007-01-01
Implementation of challenging Exploration Systems Missions Directorate objectives and strategies can be constrained by onboard computing capabilities and power efficiencies. The Radiation Hardened Electronics for Space Environments (RHESE) High Performance Processors for Space Environments project will address this challenge by significantly advancing the sustained throughput and processing efficiency of high-performance radiation-hardened processors, targeting delivery of products by the end of FY12.
Design and Analysis of Scheduling Policies for Real-Time Computer Systems
1992-01-01
C. M. Krishna, "The Impact of Workload on the Reliability of Real-Time Processor Triads," to appear in Micro. Rel. [17] J. F. Kurose, "Performance… Processor Triads," to appear in Micro. Rel. J. F. Kurose, "Performance Analysis of Minimum Laxity Scheduling in Discrete Time Queueing Systems," to… exponentially distributed service times and deadlines. A similar model was developed for the ED policy for a single processor system under identical…
NASA Technical Reports Server (NTRS)
Harper, Richard E.; Butler, Bryan P.
1990-01-01
The Draper fault-tolerant processor with fault-tolerant shared memory (FTP/FTSM), which is designed to allow application tasks to continue execution during the memory alignment process, is described. Processor performance is not affected by memory alignment. In addition, the FTP/FTSM incorporates a hardware scrubber device to perform the memory alignment quickly during unused memory access cycles. The FTP/FTSM architecture is described, followed by an estimate of the time required for channel reintegration.
Hypercluster - Parallel processing for computational mechanics
NASA Technical Reports Server (NTRS)
Blech, Richard A.
1988-01-01
An account is given of the development status, performance capabilities and implications for further development of NASA-Lewis' testbed 'hypercluster' parallel computer network, in which multiple processors communicate through a shared memory. Processors have local as well as shared memory; the hypercluster is expanded in the same manner as the hypercube, with processor clusters replacing the normal single processor node. The NASA-Lewis machine has three nodes with a vector personality and one node with a scalar personality. Each of the vector nodes uses four board-level vector processors, while the scalar node uses four general-purpose microcomputer boards.
Real-time separation of multineuron recordings with a DSP32C signal processor.
Gädicke, R; Albus, K
1995-04-01
We have developed a hardware and software package for real-time discrimination of multiple-unit activities recorded simultaneously from multiple microelectrodes using a VME-Bus system. Compared with other systems cited in the literature or commercially available, our system has the following advantages: (1) each electrode is served by its own preprocessor (DSP32C); (2) on-line spike discrimination is performed independently for each electrode; and (3) the VME-bus allows processing of data received from 16 electrodes. The digitized (62.5 kHz) spike form is itself used as the model spike; the algorithm allows for comparing and sorting complete waveforms in real time into 8 different models per electrode.
The Chimera II Real-Time Operating System for advanced sensor-based control applications
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1992-01-01
Attention is given to the Chimera II Real-Time Operating System, which has been developed for advanced sensor-based control applications. The Chimera II provides a high-performance real-time kernel and a variety of IPC features. The hardware platform required to run Chimera II consists of commercially available hardware, and allows custom hardware to be easily integrated. The design allows it to be used with almost any type of VMEbus-based processors and devices. It allows radically differing hardware to be programmed using a common system, thus providing a first and necessary step towards the standardization of reconfigurable systems that results in a reduction of development time and cost.
Baseband-processed SS-TDMA communication system architecture and design concepts
NASA Technical Reports Server (NTRS)
Attwood, S.; Sabourin, D.
1982-01-01
The architecture and system design for a commercial satellite communications system planned for the 1990's was developed by Motorola for NASA's Lewis Research Center. The system provides data communications between individual users via trunking and customer premises service terminals utilizing a central switching satellite operating in a time-division multiple-access (TDMA) mode. The major elements of the design incorporating baseband processing include: demand-assigned multiple access reservation protocol, spectral utilization, system synchronization, modulation technique and forward error control implementation. Motorola's baseband processor design, which is being proven in a proof-of-concept advanced technology development, will perform data regeneration and message routing for individual users on-board the spacecraft.
Single-Event Transient Testing of Low Dropout PNP Series Linear Voltage Regulators
NASA Technical Reports Server (NTRS)
Adell, Philippe; Allen, Gregory
2013-01-01
As demand for high-speed, on-board, digital-processing integrated circuits on spacecraft increases (field-programmable gate arrays and digital signal processors in particular), the need for the next-generation point-of-load (POL) regulator becomes a prominent design issue. Shrinking process nodes have resulted in core rails dropping to values close to 1.0 V, drastically reducing the margin available to standard switching converters or regulators that power digital ICs. The goal of this task is to perform SET characterization of several commercial POL converters through laser and heavy-ion testing, and to discuss the impact of these results on state-of-the-art digital-processing ICs.
Schafer, Erin C; Romine, Denise; Musgrave, Elizabeth; Momin, Sadaf; Huynh, Christy
2013-01-01
Previous research has suggested that electrically coupled frequency modulation (FM) systems substantially improved speech-recognition performance in noise in individuals with cochlear implants (CIs). However, there is limited evidence to support the use of electromagnetically coupled (neck loop) FM receivers with contemporary CI sound processors containing telecoils. The primary goal of this study was to compare speech-recognition performance in noise and subjective ratings of adolescents and adults using one of three contemporary CI sound processors coupled to electromagnetically and electrically coupled FM receivers from Oticon. A repeated-measures design was used to compare speech-recognition performance in noise and subjective ratings without and with the FM systems across three test sessions (Experiment 1) and to compare performance at different FM-gain settings (Experiment 2). Descriptive statistics were used in Experiment 3 to describe output differences measured through a CI sound processor. Experiment 1 included nine adolescents or adults with unilateral or bilateral Advanced Bionics Harmony (n = 3), Cochlear Nucleus 5 (n = 3), and MED-EL OPUS 2 (n = 3) CI sound processors. In Experiment 2, seven of the original nine participants were tested. In Experiment 3, electroacoustic output was measured from a Nucleus 5 sound processor when coupled to the electromagnetically coupled Oticon Arc neck loop and electrically coupled Oticon R2. In Experiment 1, participants completed a field trial with each FM receiver and three test sessions that included speech-recognition performance in noise and a subjective rating scale. In Experiment 2, participants were tested in three receiver-gain conditions. Results in both experiments were analyzed using repeated-measures analysis of variance. Experiment 3 involved electroacoustic-test measures to determine the monitor-earphone output of the CI alone and the CI coupled to the two FM receivers. The results in Experiment 1 suggested that both FM receivers provided significantly better speech-recognition performance in noise than the CI alone; however, the electromagnetically coupled receiver provided significantly better speech-recognition performance in noise and better ratings in some situations than the electrically coupled receiver when set to the same gain. In Experiment 2, the primary analysis suggested significantly better speech-recognition performance in noise for the neck-loop versus electrically coupled receiver, but a second analysis, using the best performance across gain settings for each device, revealed no significant differences between the two FM receivers. Experiment 3 revealed monitor-earphone output differences in the Nucleus 5 sound processor for the two FM receivers when set to the +8 setting used in Experiment 1 but equal output when the electrically coupled device was set to a +16 gain setting and the electromagnetically coupled device was set to the +8 gain setting. Individuals with contemporary sound processors may show more favorable speech-recognition performance in noise with electromagnetically coupled FM systems (i.e., Oticon Arc), which is most likely related to the input processing and signal processing pathway within the CI sound processor for direct input versus telecoil input. Further research is warranted to replicate these findings with a larger sample size and to develop and validate a more objective approach to fitting FM systems to CI sound processors. American Academy of Audiology.
Design of an integrated fuel processor for residential PEMFCs applications
NASA Astrophysics Data System (ADS)
Seo, Yu Taek; Seo, Dong Joo; Jeong, Jin Hyeok; Yoon, Wang Lai
KIER has been developing a novel fuel processing system to provide hydrogen-rich gas to a residential PEMFC system. For the effective design of a compact hydrogen production system, the unit processes for steam reforming and water gas shift, the steam generator, and internal heat exchangers are thermally and physically integrated into a single packaged hardware system. The newly designed fuel processor (prototype II) showed a thermal efficiency of 78% on an HHV basis with a methane conversion of 89%. The preferential oxidation unit, with two staged cascade reactors, reduces the CO concentration to below 10 ppm, the prerequisite CO limit for the PEMFC stack, without complicated temperature-control hardware. After the initial performance of the fuel processor was achieved, partial-load operation was carried out to test its performance and reliability at various loads. The stability of the fuel processor was also demonstrated for three successive days with a stable product gas composition and thermal efficiency. The CO concentration remained below 10 ppm during the test period, confirming the stable performance of the two-stage PrOx reactors.
NASA Technical Reports Server (NTRS)
Lund, D.
1998-01-01
This report presents a description of the tests performed, and the test data, for the A1 METSAT Signal Processor Assembly PN: 1331679-2, S/N F04. The assembly was tested in accordance with AE-26754, "METSAT Signal Processor Scan Drive Test and Integration Procedure." The objective is to demonstrate functionality of the signal processor prior to instrument integration.
Documentation of 50% water conservation in a single process at a beef abattoir
USDA-ARS?s Scientific Manuscript database
Beef slaughter is water intensive due to stringent food safety requirements. We conducted a study at a commercial beef processor to demonstrate water conservation by modifying the mechanical head wash. We documented the initial nozzle configuration (112 nozzles), water pressure (275 kPa), and flowra...
40 CFR 725.1 - Scope and purpose.
Code of Federal Regulations, 2011 CFR
2011-07-01
... research and development for commercial purposes. New microorganisms for which manufacturers and importers... significant new use. (b) Any manufacturer, importer, or processor required to report under section 5 of TSCA (see § 725.100 for new microorganisms and § 725.900 for significant new uses) must file a Microbial...
Code of Federal Regulations, 2014 CFR
2014-10-01
... fishery allocations. Prior to the setting of fishery allocations, the TAC, ACL, or ACT when specified, is... in the non-groundfish fishery that is deducted from the ACL or ACT when specified. (2) The commercial... from the TAC, OY, ACL, or ACT when specified. For the catcher/processor and mothership sectors of the...
Code of Federal Regulations, 2013 CFR
2013-10-01
... fishery allocations. Prior to the setting of fishery allocations, the TAC, ACL, or ACT when specified, is... in the non-groundfish fishery that is deducted from the ACL or ACT when specified. (2) The commercial... from the TAC, OY, ACL, or ACT when specified. For the catcher/processor and mothership sectors of the...
FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.
Zierke, Stephanie; Bakos, Jason D
2010-04-12
Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such, it has high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
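For readers unfamiliar with the kernel, the following is a hedged sketch of a Felsenstein-pruning style PLF inner loop for a 4-state DNA model; the array layout and names are assumptions made for illustration, not the MrBayes 3 or FPGA code:

#define N_STATES 4

void plf_inner_node(int n_sites,
                    const double *left,   /* [n_sites][4] child likelihoods */
                    const double *right,  /* [n_sites][4]                   */
                    const double pl[N_STATES][N_STATES],  /* P(t_left)      */
                    const double pr[N_STATES][N_STATES],  /* P(t_right)     */
                    double *parent)       /* [n_sites][4] output            */
{
    /* Independent iterations over sites: no loop-carried dependencies,
     * which is what makes the kernel easy to pipeline or parallelize. */
    for (int s = 0; s < n_sites; s++) {
        const double *L = left  + 4 * s;
        const double *R = right + 4 * s;
        double *P = parent + 4 * s;
        for (int i = 0; i < N_STATES; i++) {
            double sl = 0.0, sr = 0.0;
            for (int j = 0; j < N_STATES; j++) {
                sl += pl[i][j] * L[j];
                sr += pr[i][j] * R[j];
            }
            P[i] = sl * sr;
        }
    }
}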
Processors for wavelet analysis and synthesis: NIFS and TI-C80 MVP
NASA Astrophysics Data System (ADS)
Brooks, Geoffrey W.
1996-03-01
Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high speed spatio-temporal Gaussian filtering, which could be used for the QMF low-pass function, and difference of Gaussian filtering, which could be used for the QMF high-pass function. Although not designed specifically for wavelet analysis, the biologically-inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
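As a point of reference, a hedged sketch of one-dimensional QMF analysis (filter with a low-pass and a high-pass kernel, then decimate by two) is given below; tap values and edge handling are placeholders, and the NIFS hardware realizes the low-pass/high-pass pair with Gaussian and difference-of-Gaussian filtering rather than with discrete taps:

void qmf_analysis(const float *x, int n,          /* input, length n (even) */
                  const float *h, const float *g, /* low/high-pass taps     */
                  int taps,
                  float *lo, float *hi)           /* outputs, length n/2    */
{
    for (int k = 0; k < n / 2; k++) {
        float a = 0.0f, d = 0.0f;
        for (int t = 0; t < taps; t++) {
            int idx = 2 * k + t;
            if (idx >= n) idx = n - 1;   /* simple edge clamping (assumed) */
            a += h[t] * x[idx];
            d += g[t] * x[idx];
        }
        lo[k] = a;   /* approximation (low-pass, decimated) */
        hi[k] = d;   /* detail (high-pass, decimated)       */
    }
}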
7 CFR 201.73 - Processors and processing of all classes of certified seed.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Certified Seed § 201.73 Processors and processing... of certified seed: (a) Facilities shall be available to perform processing without introducing... 7 Agriculture 3 2010-01-01 2010-01-01 false Processors and processing of all classes of certified...
7 CFR 201.73 - Processors and processing of all classes of certified seed.
Code of Federal Regulations, 2011 CFR
2011-01-01
... (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Certified Seed § 201.73 Processors and processing... of certified seed: (a) Facilities shall be available to perform processing without introducing... 7 Agriculture 3 2011-01-01 2011-01-01 false Processors and processing of all classes of certified...
7 CFR 201.73 - Processors and processing of all classes of certified seed.
Code of Federal Regulations, 2013 CFR
2013-01-01
... (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Certified Seed § 201.73 Processors and processing... of certified seed: (a) Facilities shall be available to perform processing without introducing... 7 Agriculture 3 2013-01-01 2013-01-01 false Processors and processing of all classes of certified...
7 CFR 201.73 - Processors and processing of all classes of certified seed.
Code of Federal Regulations, 2012 CFR
2012-01-01
... (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Certified Seed § 201.73 Processors and processing... of certified seed: (a) Facilities shall be available to perform processing without introducing... 7 Agriculture 3 2012-01-01 2012-01-01 false Processors and processing of all classes of certified...
7 CFR 201.73 - Processors and processing of all classes of certified seed.
Code of Federal Regulations, 2014 CFR
2014-01-01
... (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Certified Seed § 201.73 Processors and processing... of certified seed: (a) Facilities shall be available to perform processing without introducing... 7 Agriculture 3 2014-01-01 2014-01-01 false Processors and processing of all classes of certified...
On board processor development for NASA's spaceborne imaging radar with system-on-chip technology
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
2004-01-01
This paper reports preliminary results of a study of an on-board spaceborne SAR processor. The study consists of a processing-requirement analysis, functional specifications, and an implementation with system-on-chip technology. Finally, a minimum version of this on-board processor, designed for performance evaluation and partial demonstration, is illustrated.
NASA Astrophysics Data System (ADS)
O'Connor, A. S.; Justice, B.; Harris, A. T.
2013-12-01
Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry-standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speed for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Components Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide a 50x - 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing performed by Exelis VIS shows that orthorectification of a WorldView-1 35,000 x 35,000 pixel image can take as long as two hours. With GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can be used successfully by first responders and by scientists making rapid discoveries with near-real-time data, and it provides an operational component for data centers needing to quickly process and disseminate data.
QWIP technology for both military and civilian applications
NASA Astrophysics Data System (ADS)
Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.
2001-10-01
Advanced thermal imaging infrared cameras have been a cost-effective and reliable method to obtain the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with Caltech, is currently manufacturing the QWIP-Chip(TM), a 320 x 256 element, bound-to-quasibound QWIP FPA. The camera operates in the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multi-tasking software, delivering a digital acquisition resolution of 12 bits with nominal power consumption of less than 50 Watts. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent application in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems and advanced image processing for use in both military remote sensing and civilian applications currently being developed in road hazard monitoring.
A fully reconfigurable photonic integrated signal processor
NASA Astrophysics Data System (ADS)
Liu, Weilin; Li, Ming; Guzzon, Robert S.; Norberg, Erik J.; Parker, John S.; Lu, Mingzhi; Coldren, Larry A.; Yao, Jianping
2016-03-01
Photonic signal processing has been considered a solution to overcome the inherent electronic speed limitations. Over the past few years, an impressive range of photonic integrated signal processors have been proposed, but they usually offer limited reconfigurability, a feature highly needed for the implementation of large-scale general-purpose photonic signal processors. Here, we report and experimentally demonstrate a fully reconfigurable photonic integrated signal processor based on an InP-InGaAsP material system. The proposed photonic signal processor is capable of performing reconfigurable signal processing functions including temporal integration, temporal differentiation and Hilbert transformation. The reconfigurability is achieved by controlling the injection currents to the active components of the signal processor. Our demonstration suggests great potential for chip-scale fully programmable all-optical signal processing.
Implementing An Image Understanding System Architecture Using Pipe
NASA Astrophysics Data System (ADS)
Luck, Randall L.
1988-03-01
This paper will describe PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low level vision and high level vision. Low level vision is performed by PIPE - a high-performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High level vision is performed by one of several types of serial or parallel computers - depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory mapped into the high level processor. Thus it forms the high speed link between the low and high level vision processors. The mechanisms for bottom-up, data driven processing and top-down, model driven processing are discussed.
NASA Astrophysics Data System (ADS)
Giusi, Giovanni; Liu, Scige J.; Galli, Emanuele; Di Giorgio, Anna M.; Farina, Maria; Vertolli, Nello; Di Lellis, Andrea M.
2016-07-01
In this paper we present the results of a series of performance tests carried out on a prototype board mounting the Cobham Gaisler GR712RC Dual Core LEON3FT processor. The aim was to characterize the performance of the dual-core processor when executing a highly demanding lossless compression task acting on data segments continuously copied from static memory to the processor RAM. The compression activity was selected to evaluate performance because it allowed a comparison with previously executed tests on the Cobham/Aeroflex Gaisler UT699 LEON3FT SPARC™ V8. The results of the test activity showed an improvement factor of 1.6 with respect to the previous tests, which could easily be increased by adopting a faster on-board clock, and provided indications on the best size of the data chunks to be used in the compression activity.
NASA Technical Reports Server (NTRS)
Fatoohi, Rod; Saini, Subbash; Ciotti, Robert
2006-01-01
We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify limiting factors and bottlenecks in the interconnects of these systems as well as to compare them. We measured network bandwidth using different numbers of communicating processors and communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of the network bandwidth and topology on the overall performance of each interconnect.
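A hedged sketch of the kind of point-to-point bandwidth test used in such benchmark suites is shown below; message size, iteration count, and names are illustrative assumptions, not the authors' benchmark code:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Ping-pong bandwidth between ranks 0 and 1; run with at least 2 ranks. */
int main(int argc, char **argv)
{
    int rank, iters = 100;
    size_t bytes = 1 << 20;                 /* 1 MiB messages (assumed) */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char *buf = malloc(bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("bandwidth: %.1f MB/s\n",
               2.0 * iters * bytes / (t1 - t0) / 1e6);
    free(buf);
    MPI_Finalize();
    return 0;
}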
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
System Level RBDO for Military Ground Vehicles using High Performance Computing
2008-01-01
... platform. Only the analyses that required more than 24 processors were conducted on the Onyx 350 due to the limited number of processors on the ... optimization constraints varied. The queues set the number of processors and the number of finite element code licenses available to the analyses. SGI Onyx ... 3900: UNIX, 24 MIPS R16000 processors, 4 IR2 graphics pipes, 4 IR3 graphics pipes, 24 GB memory, 36 GB local disk space. SGI Onyx 350: UNIX, 32 MIPS ...
Experience in highly parallel processing using DAP
NASA Technical Reports Server (NTRS)
Parkinson, D.
1987-01-01
Distributed Array Processors (DAPs) have been in day-to-day use for ten years, and a large amount of user experience has been gained. The profile of user applications is similar to that of the Massively Parallel Processor (MPP) working group. Experience has shown that, contrary to expectations, highly parallel systems provide excellent performance on so-called dirty problems such as the physics part of meteorological codes. The reasons for this observation are discussed. The arguments against replacing bit processors with floating-point processors are also discussed.
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin J.
2013-01-01
The Mobile Thread Task Manager (MTTM) is being applied to parallelizing existing flight software to understand the benefits and to develop new techniques and architectural concepts for adapting software to multicore architectures. It allocates and load-balances tasks for a group of threads that migrate across processors to improve cache performance. In order to balance load across threads, the MTTM augments a basic map-reduce strategy to draw jobs from a global queue. In a multicore processor, memory may be "homed" to the cache of a specific processor and must be accessed from that processor. The MTTM architecture wraps access to data with thread management to move threads to the home processor for that data so that the computation follows the data in an attempt to avoid L2 cache misses. Cache homing is also handled by a memory manager that translates identifiers to processor IDs where the data will be homed (according to rules defined by the user). The user can also specify the number of threads and processors separately, which is important for tuning performance for different patterns of computation and memory access. MTTM efficiently processes tasks in parallel on a multiprocessor computer. It also provides an interface to make it easier to adapt existing software to a multiprocessor environment.
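The global-queue load-balancing idea can be illustrated with a minimal pthreads sketch; this shows only the job-drawing strategy and omits the MTTM's thread migration and cache-homing logic, and all names and sizes are assumptions:

#include <pthread.h>
#include <stdio.h>

#define N_TASKS   1000
#define N_THREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;
static double results[N_TASKS];

static double do_task(int i) { return (double)i * i; }  /* placeholder work */

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int t = next_task < N_TASKS ? next_task++ : -1;  /* draw next job */
        pthread_mutex_unlock(&lock);
        if (t < 0) break;
        results[t] = do_task(t);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[N_THREADS];
    for (int i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);
    printf("done, last result = %f\n", results[N_TASKS - 1]);
    return 0;
}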
A novel VLSI processor architecture for supercomputing arrays
NASA Technical Reports Server (NTRS)
Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.
1993-01-01
Design of the processor element for general-purpose massively parallel supercomputing arrays is highly complex and cost ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify mapping of the complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high-performance mask-programmable PAcube arrays.
Compute Server Performance Results
NASA Technical Reports Server (NTRS)
Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)
1994-01-01
Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high-performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
Fault Mitigation Schemes for Future Spaceflight Multicore Processors
NASA Technical Reports Server (NTRS)
Alexander, James W.; Clement, Bradley J.; Gostelow, Kim P.; Lai, John Y.
2012-01-01
Future planetary exploration missions demand significant advances in on-board computing capabilities over current avionics architectures based on a single-core processing element. The state-of-the-art multi-core processor shows much promise in meeting such challenges while introducing new fault tolerance problems when applied to space missions. Software-based schemes are presented in this paper that can achieve system-level fault mitigation beyond that provided by radiation-hard-by-design (RHBD) techniques. For mission- and time-critical applications, such as Terrain Relative Navigation (TRN) for planetary or small-body navigation and landing, a range of fault tolerance methods can be adopted by the application. The software methods being investigated include Error Correction Code (ECC) for data packet routing between cores, virtual network routing, Triple Modular Redundancy (TMR), and Algorithm-Based Fault Tolerance (ABFT). A robust fault tolerance framework that provides fail-operational behavior under hard real-time constraints and graceful degradation will be demonstrated using TRN executing on a commercial Tilera(R) processor with simulated fault injections.
Eigensolution of finite element problems in a completely connected parallel architecture
NASA Technical Reports Server (NTRS)
Akl, F.; Morel, M.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.
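For context, the reported speed-ups S_p on p processors correspond to the parallel efficiencies E_p = S_p / p computed directly from the figures quoted above:

\[
E_p = \frac{S_p}{p}, \qquad
E_2 = \frac{1.86}{2} = 0.93, \quad
E_4 = \frac{3.13}{4} \approx 0.78, \quad
E_6 = \frac{3.18}{6} \approx 0.53, \quad
E_8 = \frac{3.61}{8} \approx 0.45 .
\]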
Adapting the Reconfigurable SpaceCube Processing System for Multiple Mission Applications
NASA Technical Reports Server (NTRS)
Petrick, Dave
2014-01-01
This paper will detail the use of SpaceCube in multiple space flight applications including the Hubble Space Telescope Servicing Mission 4 (HST-SM4), an International Space Station (ISS) radiation test bed experiment, and the main avionics subsystem for two separate ISS attached payloads. Each mission has had varying degrees of data processing complexities, performance requirements, and external interfaces. We will show the methodology used to minimize the changes required to the physical hardware, FPGA designs, embedded software interfaces, and testing. This paper will summarize significant results as they apply to each mission application. In the HST-SM4 application we utilized the FPGA resources to accelerate portions of the image processing algorithms, running them more than 25 times faster than a standard space processor, in order to meet computational speed requirements. For the ISS radiation on-orbit demonstration, the main goal is to show that we can rely on commercial FPGAs and processors in a space environment. We describe our FPGA and processor radiation mitigation strategies that have resulted in our eight PowerPCs being available and error free for more than 99.99% of the time over a period of four years. This positive data and proven reliability of the SpaceCube on ISS resulted in the Department of Defense (DoD) selecting SpaceCube, which is replacing an older and slower computer currently used on ISS, as the main avionics for two upcoming ISS experiment campaigns. This paper will show how we quickly reconfigured the SpaceCube system to meet the more stringent reliability requirements.
Parallel design patterns for a low-power, software-defined compressed video encoder
NASA Astrophysics Data System (ADS)
Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar
2011-06-01
Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allow the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.
Gigaflop performance on a CRAY-2: Multitasking a computational fluid dynamics application
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Overman, Andrea L.; Lambiotte, Jules J.; Streett, Craig L.
1991-01-01
The methodology is described for converting a large, long-running applications code that executed on a single processor of a CRAY-2 supercomputer to a version that executed efficiently on multiple processors. Although the conversion of every application is different, a discussion of the types of modification used to achieve gigaflop performance is included to assist others in the parallelization of applications for CRAY computers, especially those that were developed for other computers. An existing application, from the discipline of computational fluid dynamics, that had utilized over 2000 hrs of CPU time on CRAY-2 during the previous year was chosen as a test case to study the effectiveness of multitasking on a CRAY-2. The nature of dominant calculations within the application indicated that a sustained computational rate of 1 billion floating-point operations per second, or 1 gigaflop, might be achieved. The code was first analyzed and modified for optimal performance on a single processor in a batch environment. After optimal performance on a single CPU was achieved, the code was modified to use multiple processors in a dedicated environment. The results of these two efforts were merged into a single code that had a sustained computational rate of over 1 gigaflop on a CRAY-2. Timings and analysis of performance are given for both single- and multiple-processor runs.
Assessment of mammographic film processor performance in a hospital and mobile screening unit.
Murray, J G; Dowsett, D J; Laird, O; Ennis, J T
1992-12-01
In contrast to the majority of mammographic breast screening programmes, film processing at this centre occurs on site in both hospital and mobile trailer units. Initial (1989) quality control (QC) sensitometric tests revealed a large variation in film processor performance in the mobile unit. The clinical significance of these variations was assessed and acceptance limits for processor performance determined. Abnormal mammograms were used as reference material and copied using high definition 35 mm film over a range of exposure settings. The copies were then matched with QC film density variation from the mobile unit. All films were subsequently ranked for spatial and contrast resolution. Optimal values for processing time of 2 min (equivalent to film transit time 3 min and developer time 46 s) and temperature of 36 degrees C were obtained. The widespread anomaly of reporting film transit time as processing time is highlighted. Use of mammogram copies as a means of measuring the influence of film processor variation is advocated. Careful monitoring of the mobile unit film processor performance has produced stable quality comparable with the hospital based unit. The advantages of on site film processing are outlined. The addition of a sensitometric step wedge to all mammography film stock as a means of assessing image quality is recommended.
Software-defined reconfigurable microwave photonics processor.
Pérez, Daniel; Gasulla, Ivana; Capmany, José
2015-06-01
We propose, for the first time to our knowledge, a software-defined reconfigurable microwave photonics signal processor architecture that can be integrated on a chip and is capable of performing all the main functionalities by suitable programming of its control signals. The basic configuration is presented and a thorough end-to-end design model derived that accounts for the performance of the overall processor taking into consideration the impact and interdependencies of both its photonic and RF parts. We demonstrate the model versatility by applying it to several relevant application examples.
APRON: A Cellular Processor Array Simulation and Hardware Design Tool
NASA Astrophysics Data System (ADS)
Barr, David R. W.; Dudek, Piotr
2009-12-01
We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.
An evaluation of MPI message rate on hybrid-core processors
Barrett, Brian W.; Brightwell, Ron; Grant, Ryan; ...
2014-11-01
Power and energy concerns are motivating chip manufacturers to consider future hybrid-core processor designs that may combine a small number of traditional cores optimized for single-thread performance with a large number of simpler cores optimized for throughput performance. This trend is likely to impact the way in which compute resources for network protocol processing functions are allocated and managed. In particular, the performance of MPI match processing is critical to achieving high message throughput. In this paper, we analyze the ability of simple and more complex cores to perform MPI matching operations for various scenarios in order to gain insight into how MPI implementations for future hybrid-core processors should be designed.
Low-Latency Embedded Vision Processor (LLEVS)
2016-03-01
[Report excerpt] 3.2.3 Task 3: Projected Performance Analysis of FPGA-based Vision Processor; 3.2.3.1 Algorithms Latency Analysis; ... Programmable Gate Array Custom Hardware for Real-Time Multiresolution Analysis. ... conduct data analysis for performance projections. The data acquired through measurements, simulation and estimation provide the requisite platform for ...
Mapping a battlefield simulation onto message-passing parallel architectures
NASA Technical Reports Server (NTRS)
Nicol, David M.
1987-01-01
Perhaps the most critical problem in distributed simulation is that of mapping: without an effective mapping of workload to processors the speedup potential of parallel processing cannot be realized. Mapping a simulation onto a message-passing architecture is especially difficult when the computational workload dynamically changes as a function of time and space; this is exactly the situation faced by battlefield simulations. This paper studies an approach where the simulated battlefield domain is first partitioned into many regions of equal size; typically there are more regions than processors. The regions are then assigned to processors; a processor is responsible for performing all simulation activity associated with the regions. The assignment algorithm is quite simple and attempts to balance load by exploiting locality of workload intensity. The performance of this technique is studied on a simple battlefield simulation implemented on the Flex/32 multiprocessor. Measurements show that the proposed method achieves reasonable processor efficiencies. Furthermore, the method shows promise for use in dynamic remapping of the simulation.
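A hedged sketch of the region-to-processor assignment idea is shown below; the paper's algorithm balances measured workload by exploiting locality of workload intensity, whereas this simple block assignment merely illustrates mapping contiguous runs of regions onto processors (all names are assumptions):

void assign_regions(int n_regions, int n_procs, int owner[])
{
    int base = n_regions / n_procs;
    int extra = n_regions % n_procs;   /* first 'extra' procs get one more */
    int r = 0;
    for (int p = 0; p < n_procs; p++) {
        int count = base + (p < extra ? 1 : 0);
        for (int i = 0; i < count; i++)
            owner[r++] = p;            /* region r is simulated by proc p */
    }
}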
21 CFR 113.5 - Current good manufacturing practice.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Current good manufacturing practice. 113.5 Section... CONTAINERS General Provisions § 113.5 Current good manufacturing practice. The criteria in §§ 113.10, 113.40..., methods, practices, and controls used by the commercial processor in the manufacture, processing, or...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-07
... Act of 1996 (and reauthorized in 2007), NMFS is required to enumerate the economic impacts of the... allow analysts to estimate the economic contributions and impacts of marine fish processing to each... paper forms. Methods of submittal include email of electronic forms, and mail and facsimile transmission...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-21
... on seaLandings, a consolidated electronic means of reporting landings and production of commercial... submitting required reports and logbooks using seaLandings. NMFS will provide a demonstration of the new version of seaLandings for at-sea catcher/processors and motherships, and training on how to submit daily...
Documentation of 50% water conservation in a single process at a beef abattoir. Meat Science
USDA-ARS?s Scientific Manuscript database
Beef slaughter is water intensive due to stringent food safety requirements. We conducted a study at a commercial beef processor to demonstrate water conservation by modifying the mechanical head wash. We documented the initial nozzle configuration (112 nozzles), water pressure (275 kPa), and flowra...
Development of compact fuel processor for 2 kW class residential PEMFCs
NASA Astrophysics Data System (ADS)
Seo, Yu Taek; Seo, Dong Joo; Jeong, Jin Hyeok; Yoon, Wang Lai
The Korea Institute of Energy Research (KIER) has been developing a novel fuel processing system to provide hydrogen-rich gas to a residential polymer electrolyte membrane fuel cell (PEMFC) cogeneration system. For the effective design of a compact hydrogen production system, the unit processes of steam reforming, high- and low-temperature water gas shift, the steam generator, and internal heat exchangers are thermally and physically integrated into a packaged hardware system. Several prototypes are under development; the prototype I fuel processor showed a thermal efficiency of 73% on an HHV basis with a methane conversion of 81%. The recently tested prototype II showed improved performance, with a thermal efficiency of 76% and a methane conversion of 83%. In both prototypes, two-stage PrOx reactors reduce the CO concentration to less than 10 ppm, the prerequisite CO limit of the product gas for the PEMFC stack. After confirming the initial performance of the prototype I fuel processor, it was coupled with a PEMFC single cell to test durability, and the fuel processor operated successfully for 3 days without any failure of fuel cell voltage. The prototype II fuel processor also showed stable performance during the durability test.
MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barhen, Jacob; Kerekes, Ryan A; ST Charles, Jesse Lee
2008-01-01
High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper will highlight recent experience with four different parallel processors applied to signal processing tasks that are directly relevant to signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Doppler-sensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM CELL Broadband Engine applied to 2-D discrete Fourier transform (DFT) kernel for image processing and frequency domain processing. And the third is the NVIDIA graphical processor applied to document feature clustering. EnLight Optical Core Processor. Optical processing is inherently capable of high parallelism that can be translated to very high performance, low power dissipation computing. The EnLight 256 is a small form factor signal processing chip (5x5 cm2) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and on applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today. The optical core performs the matrix-vector multiplications, where the nominal matrix size is 256x256. The system clock is 125 MHz. At each clock cycle, 128K multiply-and-add operations are carried out, which yields a peak performance of 16 TeraOPS. IBM Cell Broadband Engine. The Cell processor is the extraordinary resulting product of 5 years of sustained, intensive R&D collaboration (involving over $400M investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed in 128-bit wide single instruction multiple data streams (SIMD). An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at the ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units.
The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.
Accuracy of the lattice-Boltzmann method using the Cell processor
NASA Astrophysics Data System (ADS)
Harvey, M. J.; de Fabritiis, G.; Giupponi, G.
2008-11-01
Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack double-precision support and support for some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show a reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony Playstation3 and the QS20/QS21 IBM blade, obtaining speed-up factors of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that the choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.
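For orientation, a hedged single-precision sketch of the D2Q9 BGK collision step at the heart of such an LB code follows; the data layout, relaxation time, and names are assumptions made for illustration, not the authors' Cell implementation:

static const float W[9]  = { 4.f/9,  1.f/9,  1.f/9,  1.f/9,  1.f/9,
                             1.f/36, 1.f/36, 1.f/36, 1.f/36 };
static const int   EX[9] = { 0, 1, 0, -1,  0, 1, -1, -1,  1 };
static const int   EY[9] = { 0, 0, 1,  0, -1, 1,  1, -1, -1 };

/* Relax the nine distributions of one lattice cell toward equilibrium. */
void collide_cell(float f[9], float tau)
{
    /* macroscopic density and velocity from the distributions */
    float rho = 0.f, ux = 0.f, uy = 0.f;
    for (int i = 0; i < 9; i++) {
        rho += f[i];
        ux  += f[i] * EX[i];
        uy  += f[i] * EY[i];
    }
    ux /= rho;  uy /= rho;
    float usq = ux * ux + uy * uy;

    /* BGK collision: f_i <- f_i - (f_i - f_i^eq) / tau */
    for (int i = 0; i < 9; i++) {
        float eu  = EX[i] * ux + EY[i] * uy;
        float feq = W[i] * rho * (1.f + 3.f * eu + 4.5f * eu * eu - 1.5f * usq);
        f[i] -= (f[i] - feq) / tau;
    }
}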
Method for prefetching non-contiguous data structures
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Brewster, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2009-05-05
A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.
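A hedged software analogue of the embedded-pointer prefetch idea is sketched below; in the patented scheme the pointer lives in the physical memory line and the prefetch is issued by hardware, whereas this sketch merely follows a stored hint using a GCC/Clang compiler builtin (all names and sizes are assumptions):

#include <stddef.h>

#define LINE_BYTES 128

struct mem_line {
    unsigned char data[LINE_BYTES];  /* normal line contents            */
    struct mem_line *next_hint;      /* pointer to the line to prefetch */
};

/* Walk a non-contiguous structure, issuing a software prefetch for the
 * line named by each line's embedded hint. */
void walk_with_hints(struct mem_line *line, void (*use)(unsigned char *))
{
    while (line) {
        if (line->next_hint)
            __builtin_prefetch(line->next_hint, 0, 3);  /* read, high locality */
        use(line->data);
        line = line->next_hint;
    }
}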
Fast particles identification in programmable form at level-0 trigger by means of the 3D-Flow system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crosetto, Dario B.
1998-10-30
The 3D-Flow Processor system is a new, technology-independent concept in very fast, real-time system architectures. Based on either an FPGA or an ASIC implementation, it can address, in a fully programmable manner, applications where commercially available processors would fail because of throughput requirements. Possible applications include filtering algorithms (pattern recognition) from the input of multiple sensors, as well as moving any input validated by these filtering algorithms to a single output channel. Both operations can easily be implemented on a 3D-Flow system to achieve a real-time processing system with a very short lag time. This system can be built either with off-the-shelf FPGAs or, for higher data rates, with CMOS chips containing 4 to 16 processors each. The basic building block of the system, a 3D-Flow processor, has been successfully designed in VHDL code written in "Generic HDL" (mostly made of reusable blocks that are synthesizable in different technologies, or FPGAs), to produce a netlist for a four-processor ASIC featuring 0.35 micron CBA (Cell Base Array) technology at 3.3 Volts, 884 mW power dissipation at 60 MHz and 63.75 mm sq. die size. The same VHDL code has been targeted to three FPGA manufacturers (Altera EPF10K250A, ORCA-Lucent Technologies 0R3T165 and Xilinx XCV1000). A complete set of software tools, the 3D-Flow System Manager, equally applicable to ASIC or FPGA implementations, has been produced to provide full system simulation, application development, real-time monitoring, and run-time fault recovery. Today's technology can accommodate 16 processors per chip in a medium size die, at a cost per processor of less than $5 based on the current silicon die/size technology cost.
Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor
NASA Astrophysics Data System (ADS)
Hristov, Ivan; Goranov, Goran; Hristova, Radoslava
2018-02-01
We consider a standing-wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed sine-Gordon equations. We make an OpenMP realization that exploits both thread and SIMD levels of parallelism. We test the OpenMP program on two different energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the HybriLIT cluster, and the Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show 2 times better performance on the KNL processor.
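A hedged sketch of the kind of OpenMP thread-plus-SIMD stencil update such a study exploits is given below, using a plain leapfrog step for u_tt = u_xx + u_yy - sin(u); grid handling, coefficients, and the omitted perturbation (damping/driving) terms are illustrative assumptions, not the authors' code:

#include <math.h>

void step(int nx, int ny, double dt, double dx,
          const double *um,   /* u at t - dt, size nx*ny */
          const double *u,    /* u at t                  */
          double *up)         /* u at t + dt (output)    */
{
    double c = (dt * dt) / (dx * dx);
    #pragma omp parallel for                    /* thread-level parallelism */
    for (int i = 1; i < nx - 1; i++) {
        #pragma omp simd                        /* SIMD-level parallelism   */
        for (int j = 1; j < ny - 1; j++) {
            int k = i * ny + j;
            double lap = u[k - ny] + u[k + ny] + u[k - 1] + u[k + 1]
                         - 4.0 * u[k];
            up[k] = 2.0 * u[k] - um[k] + c * lap - dt * dt * sin(u[k]);
        }
    }
}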
Reduced power processor requirements for the 30-cm diameter Hg ion thruster
NASA Technical Reports Server (NTRS)
Rawlin, V. K.
1979-01-01
The characteristics of power processors strongly impact the overall performance and cost of electric propulsion systems. A program was initiated to evaluate simplifications of the thruster-power processor interface requirements. The power processor requirements are mission dependent with major differences arising for those missions which require a nearly constant thruster operating point (typical of geocentric and some inbound planetary missions) and those requiring operation over a large range of input power (such as outbound planetary missions). This paper describes the results of tests which have indicated that as many as seven of the twelve power supplies may be eliminated from the present Functional Model Power Processor used with 30-cm diameter Hg ion thrusters.
MILC Code Performance on High End CPU and GPU Supercomputer Clusters
NASA Astrophysics Data System (ADS)
DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug
2018-03-01
With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, a deeper memory hierarchy, and greater programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
Diesel fuel to dc power: Navy & Marine Corps Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloomfield, D.P.
1996-12-31
During the past year Analytic Power has tested fuel cell stacks and diesel fuel processors for US Navy and Marine Corps applications. The units are 10 kW demonstration power plants. The USN power plant was built to demonstrate the feasibility of diesel fueled PEM fuel cell power plants for 250 kW and 2.5 MW shipboard power systems. We designed and tested a ten cell, 1 kW USMC substack and fuel processor. The complete 10 kW prototype power plant, which has application to both power and hydrogen generation, is now under construction. The USN and USMC fuel cell stacks have been tested on both actual and simulated reformate. Analytic Power has accumulated operating experience with autothermal reforming based fuel processors operating on sulfur-bearing diesel fuel, jet fuel, propane and natural gas. We have also completed the design and fabrication of an advanced regenerative ATR for the USMC. One of the significant problems with small fuel processors is heat loss, which limits their ability to operate with the high steam-to-carbon ratios required for coke-free, high-efficiency operation. The new USMC unit specifically addresses these heat transfer issues. The advances in the military programs have been incorporated into Analytic Power's commercial units, which are now under test.
Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA
NASA Astrophysics Data System (ADS)
Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei
2013-03-01
With the development of international wireless communication standards, the computational requirements for baseband signal processors are increasing. Time-to-market pressure makes it impossible to completely redesign new processors for each evolving standard. Owing to their high flexibility and low power, software defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA approaches. In addition, computation-intensive functions process large amounts of data in parallel, which fosters the development of single instruction multiple data (SIMD) architectures in SDR platforms. A new way must therefore be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of programmable SIMD SDR processors in the machine description language LISA, an instruction set architecture language that enables rapid modeling at the architectural level. To evaluate the proposed processor, three common baseband functions, FFT, FIR digital filtering, and matrix multiplication, were mapped onto the SDR platform. Analytical results showed that the SDR processor achieved a performance boost of up to 47.1% relative to the comparison processor.
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
Shannon, Robert V.; Cruz, Rachel J.; Galvin, John J.
2011-01-01
High stimulation rates in cochlear implants (CI) offer better temporal sampling, can induce stochastic-like firing of auditory neurons and can increase the electric dynamic range, all of which could improve CI speech performance. While commercial CIs have employed increasingly high stimulation rates, no clear or consistent advantage has been shown for high rates. In this study, speech recognition was acutely measured with experimental processors in 7 CI subjects (Clarion CII users). The stimulation rate varied between (approx.) 600 and 4800 pulses per second per electrode (ppse) and the number of active electrodes varied between 4 and 16. Vowel, consonant, consonant-nucleus-consonant word and IEEE sentence recognition were acutely measured in quiet and in steady noise (+10 dB signal-to-noise ratio). Subjective quality ratings were obtained for each of the experimental processors in quiet and in noise. Except for a small difference for vowel recognition in quiet, there were no significant differences in performance among the experimental stimulation rates for any of the speech measures. There was also a small but significant increase in subjective quality rating as stimulation rates increased from 1200 to 2400 ppse in noise. Consistent with previous studies, performance significantly improved as the number of electrodes was increased from 4 to 8, but no significant difference was found among 8, 12 and 16 electrodes. Altogether, there was little-to-no advantage of high stimulation rates in quiet or in noise, at least for the present speech tests and conditions. PMID:20639631
Satellite on-board real-time SAR processor prototype
NASA Astrophysics Data System (ADS)
Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François
2017-11-01
A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates, this is a clear disadvantage. SAR images are typically processed electronically by applying dedicated Fourier transformations. This, however, can also be performed optically in real time. Originally, the first SAR images were optically processed. The optical Fourier processor architecture provides inherent parallel computing capabilities, allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel, yielding real-time performance, i.e., without a resulting bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and size are reviewed.
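SAR image formation relies on Fourier-transform processing, which this instrument carries out optically; purely as a digital illustration of one such step, the sketch below range-compresses a single echo line by frequency-domain matched filtering with FFTW. The function, buffer names and 1/n normalization are assumptions and do not describe the optical processor itself.

    #include <fftw3.h>

    /* Range-compress one echo line of length n: multiply its spectrum by the
     * complex conjugate of the reference chirp spectrum, then transform back. */
    void range_compress(fftw_complex *echo, const fftw_complex *chirp_spec, int n)
    {
        fftw_complex *spec = fftw_alloc_complex(n);
        fftw_plan fwd = fftw_plan_dft_1d(n, echo, spec, FFTW_FORWARD, FFTW_ESTIMATE);
        fftw_plan inv = fftw_plan_dft_1d(n, spec, echo, FFTW_BACKWARD, FFTW_ESTIMATE);

        fftw_execute(fwd);
        for (int k = 0; k < n; k++) {
            double sr = spec[k][0], si = spec[k][1];
            double hr = chirp_spec[k][0], hi = -chirp_spec[k][1];   /* conjugate       */
            spec[k][0] = (sr * hr - si * hi) / n;                   /* matched filter  */
            spec[k][1] = (sr * hi + si * hr) / n;                   /* and FFT scaling */
        }
        fftw_execute(inv);

        fftw_destroy_plan(fwd);
        fftw_destroy_plan(inv);
        fftw_free(spec);
    }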
Santarelli, Rosamaria; Magnavita, Vincenzo; De Filippi, Roberta; Ventura, Laura; Genovese, Elisabetta; Arslan, Edoardo
2009-04-01
To compare speech perception performance in children fitted with a previous-generation Nucleus sound processor, the Sprint or Esprit 3G, and the Freedom, the most recently released system from the Cochlear Corporation, which features a larger input dynamic range. Prospective intrasubject comparative study. University Medical Center. Seventeen prelingually deafened children who had received the Nucleus 24 cochlear implant and used the Sprint or Esprit 3G sound processor. Cochlear implantation with Cochlear device. Speech perception was evaluated at baseline (Sprint, n = 11; Esprit 3G, n = 6) and after 1 month's experience with the Freedom sound processor. Identification and recognition of disyllabic words and identification of vowels were performed via recorded voice in quiet (70 dB [A]), in the presence of background noise at various levels of signal-to-noise ratio (+10, +5, 0, -5) and at a soft presentation level (60 dB [A]). Consonant identification and recognition of disyllabic words, trisyllabic words, and sentences were evaluated in live voice. Frequency discrimination was measured in a subset of subjects (n = 5) by using an adaptive, 3-interval, 3-alternative, forced-choice procedure. Identification of disyllabic words administered at a soft presentation level showed a significant increase when switching to the Freedom compared with the previously worn processor in children using the Sprint or Esprit 3G. Identification and recognition of disyllabic words in the presence of background noise as well as consonant identification and sentence recognition increased significantly for the Freedom compared with the previously worn device only in children fitted with the Sprint. Frequency discrimination was significantly better when switching to the Freedom compared with the previously worn processor. Serial comparisons revealed that speech perception performance evaluated in children aged 5 to 15 years was superior with the Freedom compared with previous generations of Nucleus sound processors. These differences are deemed to ensue from an increased input dynamic range, a feature that offers potentially enhanced phonemic discrimination.
An Application-Based Performance Characterization of the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Djomehri, Jahed M.; Hood, Robert; Jin, Hoaqiang; Kiris, Cetin; Saini, Subhash
2005-01-01
Columbia is a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each, and currently ranked as the second-fastest computer in the world. In this paper, we present the performance characteristics of Columbia obtained on up to four computing nodes interconnected via the InfiniBand and/or NUMAlink4 communication fabrics. We evaluate floating-point performance, memory bandwidth, message passing communication speeds, and compilers using a subset of the HPC Challenge benchmarks, and some of the NAS Parallel Benchmarks including the multi-zone versions. We present detailed performance results for three scientific applications of interest to NASA, one from molecular dynamics, and two from computational fluid dynamics. Our results show that both the NUMAlink4 and the InfiniBand hold promise for application scaling to a large number of processors.
NASA Technical Reports Server (NTRS)
1998-01-01
This report presents a description of the tests performed, and the test data, for the A2 METSAT Signal Processor Assembly PN: 1331120-2, S/N F03. The assembly was tested in accordance with AE-26754, "METSAT Signal Processor Scan Drive Test and Integration Procedure."
NASA Technical Reports Server (NTRS)
1998-01-01
This report presents a description of the tests performed, and the test data, for the A2 METSAT Signal Processor Assembly PN: 1331120-2, S/N F04. The assembly was tested in accordance with AE-26754, "METSAT Signal Processor Scan Drive Test and Integration Procedure."
Analysis and simulation tools for solar array power systems
NASA Astrophysics Data System (ADS)
Pongratananukul, Nattorn
This dissertation presents simulation tools developed specifically for the design of solar array power systems. Contributions are made in several aspects of the system design phases, including solar source modeling, system simulation, and controller verification. A tool to automate the study of solar array configurations using general purpose circuit simulators has been developed based on the modeling of individual solar cells. Hierarchical structure of solar cell elements, including semiconductor properties, allows simulation of electrical properties as well as the evaluation of the impact of environmental conditions. A second developed tool provides a co-simulation platform with the capability to verify the performance of an actual digital controller implemented in programmable hardware such as a DSP processor, while the entire solar array including the DC-DC power converter is modeled in software algorithms running on a computer. This "virtual plant" allows developing and debugging code for the digital controller, and also to improve the control algorithm. One important task in solar arrays is to track the maximum power point on the array in order to maximize the power that can be delivered. Digital controllers implemented with programmable processors are particularly attractive for this task because sophisticated tracking algorithms can be implemented and revised when needed to optimize their performance. The proposed co-simulation tools are thus very valuable in developing and optimizing the control algorithm, before the system is built. Examples that demonstrate the effectiveness of the proposed methodologies are presented. The proposed simulation tools are also valuable in the design of multi-channel arrays. In the specific system that we have designed and tested, the control algorithm is implemented on a single digital signal processor. In each of the channels the maximum power point is tracked individually. In the prototype we built, off-the-shelf commercial DC-DC converters were utilized. At the end, the overall performance of the entire system was evaluated using solar array simulators capable of simulating various I-V characteristics, and also by using an electronic load. Experimental results are presented.
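One maximum-power-point-tracking algorithm commonly implemented on such programmable digital controllers is perturb-and-observe; the fragment below is a minimal per-iteration sketch. The structure fields, step size and duty-cycle limits are illustrative assumptions, not values from the dissertation.

    /* One perturb-and-observe iteration: measure panel power, compare with the
     * previous sample, and nudge the DC-DC converter duty cycle toward higher power. */
    typedef struct {
        double duty;        /* current converter duty cycle (0..1)       */
        double step;        /* perturbation size, e.g. 0.005 (assumed)   */
        double prev_power;  /* power measured at the previous iteration  */
        int direction;      /* +1 or -1: current perturbation direction  */
    } mppt_state;

    double mppt_step(mppt_state *s, double v_panel, double i_panel)
    {
        double power = v_panel * i_panel;
        if (power < s->prev_power)          /* last perturbation reduced power: reverse */
            s->direction = -s->direction;
        s->duty += s->direction * s->step;
        if (s->duty > 0.95) s->duty = 0.95; /* keep duty cycle in a safe range */
        if (s->duty < 0.05) s->duty = 0.05;
        s->prev_power = power;
        return s->duty;                     /* new duty cycle for the converter PWM */
    }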
Automatic film processors' quality control test in Greek military hospitals.
Lymberis, C; Efstathopoulos, E P; Manetou, A; Poudridis, G
1993-04-01
The two major military radiology installations (Athens, Greece), using a total of 15 automatic film processors, were assessed using the 21-step-wedge method. The results of quality control in all these processors are presented. The parameters measured under actual working conditions were base and fog, contrast and speed. Base and fog as well as speed displayed large variations, with average values generally higher than acceptable, whilst contrast displayed greater stability. Developer temperature was measured daily during the test and was found to be outside the film manufacturers' recommended limits in nine of the 15 processors. In only one processor did film passing time vary on an everyday basis, and this was due to maloperation. The developer pH test was not part of the daily monitoring; it was performed every 5 days for each film processor and found to be in the range 9-12, with 10 of the 15 processors presenting pH values outside the limits specified by the film manufacturers.
50 CFR 679.50 - Groundfish Observer Program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... completion of the electronic vessel and/or processor survey(s); (B) Complete NMFS electronic vessel and/or processor surveys before performing other jobs or duties which are not part of NMFS groundfish observer...
50 CFR 679.50 - Groundfish Observer Program.
Code of Federal Regulations, 2011 CFR
2011-10-01
... completion of the electronic vessel and/or processor survey(s); (B) Complete NMFS electronic vessel and/or processor surveys before performing other jobs or duties which are not part of NMFS groundfish observer...
NASA Technical Reports Server (NTRS)
Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)
1993-01-01
This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, M.S.
The Barnwell Waste Management Facility (BWMF) is scheduled to restrict access to waste generators outside of the Atlantic Compact (SC, CT, NJ) on July 1, 2008. South Carolina, authorized under the Low-Level Waste Policy Act of 1980 and Amendments Act of 1985, and in agreement with the other Atlantic Compact states, will only accept Class A, B, and C low-level radioactive waste (LLRW) generated within compact. For many years, the BWMF has been the only LLRW disposal facility to accept Class B and C waste from LLRW generators throughout the country, except those that have access to the Northwest Compact Site. Many Class B/C waste generators consider this to be a national crisis situation requiring interim or possible permanent storage, changes in operation, significant cost impacts, and/or elimination of services, especially in the health care and non-power generation industries. With proper in-house waste management practices and utilization of commercial processor services, a national crisis can be avoided, although some generators with specific waste forms or radionuclides will remain without options. In summary: It is unknown what the future will bring for commercial LLRW disposal. Could the anticipated post Barnwell Class B/C crisis be avoided by any of the following? - Barnwell Site remains open for the nation's commercial Class B/C waste; - Richland Site opens back up to the nation for commercial Class B/C waste; - Texas Site opens up to the nation for commercial Class B/C waste; - Federal Government intervenes by keeping a commercial Class B/C site open for the nation's commercial Class B/C waste; - Federal Government makes a DOE site available for commercial Class B/C waste; - Federal Government revisits the LLRW Policy Act of 1980 and Amendments Act of 1985. Without a future LLRW site capable of accepting Class B/C currently on the horizon, commercial LLRW generators are faced with waste volume elimination, reduction, or storage. With proper in-house waste management practices, utilization of commercial processor services and regulatory relief, a national crisis can be avoided. Waste volumes for storage can be reduced to as little as 10% of the current Class B/C volume. Although a national LLRW crisis can be avoided, some generators with specific waste forms or radionuclides will have a significant financial and/or operational impact due to a lack of commercial LLRW management options. (authors)
NASA Technical Reports Server (NTRS)
Kawamoto, Y.
1982-01-01
The objective of the 30/20 GHz Flight Experiment System is to develop the required technology and to experiment with the communication technique for an operational communication satellite system. The system uses polarization, spatial, and frequency isolations to maximize the spectrum utilization. The key spacecraft technologies required for the concept are the scan beam antenna, the baseband processor, the IF switch matrix, TWTA, SSPA, and LNA. The spacecraft communication payload information will be telemetered and monitored closely so that these technologies and performances can be verified. Two types of services, a trunk service and a customer premise service, are demonstrated in the system. Many experiments associated with these services, such as synchronization, demand assignment, link control, and network control will be performed to provide important information on the operational aspect of the system.
A wide-angle camera module for disposable endoscopy
NASA Astrophysics Data System (ADS)
Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee
2016-08-01
A wide-angle miniaturized camera module for a disposable endoscope is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.
Linear and nonlinear schemes applied to pitch control of wind turbines.
Geng, Hua; Yang, Geng
2014-01-01
Linear controllers have been employed in industrial applications for many years, but they are sometimes ineffective on systems with nonlinear characteristics. This paper discusses the structure, performance, implementation cost, advantages, and disadvantages of different linear and nonlinear schemes applied to the pitch control of wind energy conversion systems (WECSs). The linear controller has the simplest structure and is easily understood by engineers, and thus is widely accepted by the industry. In contrast, nonlinear schemes are more complicated, but they can provide better performance. Although nonlinear algorithms can be implemented in a powerful digital processor nowadays, they need time to be accepted by the industry and their reliability needs to be verified in commercial products. More information about the system's nonlinear behavior helps to simplify the controller design. However, nonlinear schemes independent of the system model are more robust to uncertainties or deviations in the system parameters.
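As a minimal sketch of the kind of linear scheme contrasted here with nonlinear ones, the fragment below implements a PI pitch controller with simple anti-windup; the gains, limits and signal names are assumptions, not the paper's design.

    /* PI pitch controller: regulates rotor speed toward its rated value by
     * increasing blade pitch when the turbine over-speeds.  Illustrative only. */
    typedef struct {
        double kp, ki;      /* proportional and integral gains (assumed) */
        double integral;    /* integrator state                          */
        double pitch_min;   /* deg, e.g. 0                               */
        double pitch_max;   /* deg, e.g. 90                              */
    } pi_pitch;

    double pitch_command(pi_pitch *c, double omega, double omega_rated, double dt)
    {
        double err = omega - omega_rated;   /* positive when over-speeding */
        c->integral += err * dt;
        double pitch = c->kp * err + c->ki * c->integral;
        if (pitch > c->pitch_max) { pitch = c->pitch_max; c->integral -= err * dt; } /* anti-windup */
        if (pitch < c->pitch_min) { pitch = c->pitch_min; c->integral -= err * dt; }
        return pitch;                       /* commanded blade pitch angle */
    }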
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified frameworks of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective inter-processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
2nd Generation QUATARA Flight Computer Project
NASA Technical Reports Server (NTRS)
Falker, Jay; Keys, Andrew; Fraticelli, Jose Molina; Capo-Iugo, Pedro; Peeples, Steven
2015-01-01
Single core flight computer boards have been designed, developed, and tested (DD&T) to be flown in small satellites for the last few years. In this project, a prototype flight computer will be designed as a distributed multi-core system containing four microprocessors running code in parallel. This flight computer will be capable of performing multiple computationally intensive tasks such as processing digital and/or analog data, controlling actuator systems, managing cameras, operating robotic manipulators and transmitting to and receiving from a ground station. In addition, this flight computer will be designed to be fault tolerant, both by creating a robust physical hardware connection and by using a software voting scheme to determine the processor's performance. This voting scheme will leverage the work done for the Space Launch System (SLS) flight software. The prototype flight computer will be constructed with Commercial Off-The-Shelf (COTS) components which are estimated to survive for two years in low-Earth orbit.
NASA Astrophysics Data System (ADS)
Baker, Jameson Todd
The complex dose patterns that arise in Intensity Modulated Radiation Therapy make the typical QA of a second calculation insufficient for ensuring safe treatment of patients. Many facilities choose to deliver the treatment to film inserted in a phantom and calculate the dose delivered as an additional check of the treatment plan. Radiochromic films allow for measurements without the use of a processor in the current digital age. International Specialty Products developed Gafchromic EBT film, which is a radiochromic film having a useful range of 1 -- 800 cGy. EBT film properties are fully analyzed including studies of uniformity, spectral absorption, exposure sensitivity, energy dependence and post-exposure density growth. Dosimetric performance on commercially available digitizers is studied with specific attention to their shortcomings. Finally, a custom designed scanner is built specifically for EBT film and its unique properties. Performance of the EBT digitizer is analyzed and compared against currently available scanners.
Electrical Prototype Power Processor for the 30-cm Mercury electric propulsion engine
NASA Technical Reports Server (NTRS)
Biess, J. J.; Frye, R. J.
1978-01-01
An Electrical Prototype Power Processor has been designed to the latest electrical and performance requirements for a flight-type 30-cm ion engine and includes all the necessary power, command, telemetry and control interfaces for a typical electric propulsion subsystem. The power processor was configured into seven separate mechanical modules that would allow subassembly fabrication, test and integration into a complete power processor unit assembly. The conceptual mechanical packaging of the electrical prototype power processor unit demonstrated the relative location of power, high voltage and control electronic components to minimize electrical interactions and to provide adequate thermal control in a vacuum environment. Thermal control was accomplished with a heat pipe simulator attached to the base of the modules.
USDA-ARS?s Scientific Manuscript database
Pork scrapple was formulated, with or without citrate-diacetate (0.64%), by a commercial processor to contain various solutions/blends of the following antimicrobials to control L. monocytogenes on pork scrapple during refrigerated storage: i) lactate-diacetate (3.0 or 4.0%), ii) lactate-diacetate-p...
Processor Units Reduce Satellite Construction Costs
NASA Technical Reports Server (NTRS)
2014-01-01
As part of the effort to build the Fast Affordable Science and Technology Satellite (FASTSAT), Marshall Space Flight Center developed a low-cost telemetry unit which is used to facilitate communication between a satellite and its receiving station. Huntsville, Alabama-based Orbital Telemetry Inc. has licensed the NASA technology and is offering to install the cost-cutting units on commercial satellites.
21 CFR 108.35 - Thermal processing of low-acid foods packaged in hermetically sealed containers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... authorized employee of the Food and Drug Administration to inspect the commercial processor's manufacturing... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Thermal processing of low-acid foods packaged in hermetically sealed containers. 108.35 Section 108.35 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT...
2016-03-01
in-vitro decision to incubate a startup, Lexumo [7], which is developing a commercial Software as a Service (SaaS) vulnerability assessment ... Acronyms: LTS, Label Transition System; MUSE, Mining and Understanding Software Enclaves; RTEMS, Real-Time Executive for Multi-processor Systems; SaaS, Software as a Service; SSA, Static Single Assignment; SWE, Software Epistemology; UD/DU, Def-Use/Use-Def Chains (Dataflow Graph).
Chen, Dong; Giampapa, Mark; Heidelberger, Philip; Ohmacht, Martin; Satterfield, David L; Steinmacher-Burow, Burkhard; Sugavanam, Krishnan
2013-05-21
A system and method for enhancing performance of a computer which includes a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program are executed by a processor. The processor processes instructions from the program. A wait state in the processor waits to receive specified data. A thread in the processor has a pause state wherein the processor waits for specified data. A pin in the processor initiates a return to an active state from the pause state for the thread. A logic circuit is external to the processor, and the logic circuit is configured to detect a specified condition. The pin initiates a return to the active state of the thread when the specified condition is detected using the logic circuit.
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor to processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
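The error-masking step described above can be illustrated with a small software voter; the sketch below takes a strict majority vote over n redundant copies of a result. The exact-match comparison over fixed-width words is an illustrative simplification, not SIFT's actual voting code.

    #include <stdbool.h>
    #include <stdint.h>

    /* Majority vote over n redundant copies of a result.  Returns true and writes
     * the winning value if some value holds a strict majority; returns false if no
     * majority exists, in which case the frame must be flagged as faulty. */
    bool majority_vote(const uint32_t *results, int n, uint32_t *winner)
    {
        for (int i = 0; i < n; i++) {
            int votes = 0;
            for (int j = 0; j < n; j++)
                if (results[j] == results[i])
                    votes++;
            if (2 * votes > n) {            /* strict majority */
                *winner = results[i];
                return true;
            }
        }
        return false;
    }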
Wang, Po T; Gandasetiawan, Keulanna; McCrimmon, Colin M; Karimi-Bidhendi, Alireza; Liu, Charles Y; Heydari, Payam; Nenadic, Zoran; Do, An H
2016-08-01
A fully implantable brain-computer interface (BCI) can be a practical tool to restore independence to those affected by spinal cord injury. We envision that such a BCI system will invasively acquire brain signals (e.g. electrocorticogram) and translate them into control commands for external prostheses. The feasibility of such a system was tested by implementing its benchtop analogue, centered around a commercial, ultra-low power (ULP) digital signal processor (DSP, TMS320C5517, Texas Instruments). A suite of signal processing and BCI algorithms, including (de)multiplexing, Fast Fourier Transform, power spectral density, principal component analysis, linear discriminant analysis, Bayes rule, and finite state machine was implemented and tested in the DSP. The system's signal acquisition fidelity was tested and characterized by acquiring harmonic signals from a function generator. In addition, the BCI decoding performance was tested, first with signals from a function generator, and subsequently using human electroencephalogram (EEG) during eyes opening and closing task. On average, the system spent 322 ms to process and analyze 2 s of data. Crosstalk (< -65 dB) and harmonic distortion (~1%) were minimal. Timing jitter averaged 49 μs per 1000 ms. The online BCI decoding accuracies were 100% for both function generator and EEG data. These results show that a complex BCI algorithm can be executed on an ULP DSP without compromising performance. This suggests that the proposed hardware platform may be used as a basis for future, fully implantable BCI systems.
Many-core computing for space-based stereoscopic imaging
NASA Astrophysics Data System (ADS)
McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry
The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.
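For reference, the energy-delay product reported here is the standard combined metric; under the usual definition (not restated in the abstract), EDP = E · T = P_avg · T², where E is the energy consumed by the run, T its execution time and P_avg the average power, so lower values indicate a better energy-performance trade-off.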
High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center
NASA Astrophysics Data System (ADS)
Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.
2015-12-01
The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program during April, 2012 - April, 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort of HOPS, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of interest in particular. As for ASCENDS, HOPS replaces time domain data processing with frequency domain processing while making the real-time on-board data processing possible. As for 3-D Winds, HOPS offers real-time high-resolution wind profiling with 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable user-friendly computational elements, its FPGA IP Core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec while the typical maximum processor-to-SDRAM bandwidth of the commercial radiation tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec while the effective processor-to-cPCI bandwidth of commercial radiation tolerant high-end boards is about 50-75 MB/sec. Also, HOPS offers VHDL cores for the easy and efficient implementation of ASCENDS and 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.
High-Speed On-Board Data Processing for Science Instruments: HOPS
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey
2015-01-01
The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program during April, 2012 - April, 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort of HOPS, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of interest in particular. As for ASCENDS, HOPS replaces time domain data processing with frequency domain processing while making the real-time on-board data processing possible. As for 3-D Winds, HOPS offers real-time high-resolution wind profiling with 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable user-friendly computational elements, its FPGA IP Core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec while the typical maximum processor-to-SDRAM bandwidth of the commercial radiation tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec while the effective processor-to-cPCI bandwidth of commercial radiation tolerant high-end boards is about 50-75 MB/sec. Also, HOPS offers VHDL cores for the easy and efficient implementation of ASCENDS and 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.
A microprocessor based high speed packet switch for satellite communications
NASA Technical Reports Server (NTRS)
Arozullah, M.; Crist, S. C.
1980-01-01
The architectures of a single processor, a three processor, and a multiple processor system are described. The hardware circuits, and software routines required for implementing the three and multiple processor designs are presented. A bit-slice microprocessor was designed and microprogrammed. Maximum throughput was calculated for all three designs. Queue theoretic models for these three designs were developed and utilized to obtain analytical expressions for the average waiting times, overall average response times and average queue sizes. From these expressions, graphs were obtained showing the effect on the system performance of a number of design parameters.
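The queueing models themselves are not reproduced in the abstract; as one illustration of the closed-form expressions such models yield, under a simple M/M/1 assumption (Poisson packet arrivals at rate λ, exponential service at rate μ, utilization ρ = λ/μ < 1) the mean waiting time is W_q = ρ/(μ − λ), the mean response time is W = 1/(μ − λ), and the mean queue length is L_q = ρ²/(1 − ρ). These single-queue formulas are illustrative only; the paper derives its own expressions for the single-, three-, and multiple-processor designs.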
Interactive high-resolution isosurface ray casting on multicore processors.
Wang, Qin; JaJa, Joseph
2008-01-01
We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor being a quad-core 1.86-GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024² screen for all the datasets tested up to the maximum size of the main memory of our platform.
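The dynamic allocation of groups of ray-casting tasks can be approximated in shared-memory code by a dynamically scheduled loop over small tiles of contiguous pixels, as in the sketch below. The tile granularity, chunk size and stub tile renderer are assumptions; this is not the authors' implementation.

    #include <omp.h>

    /* Placeholder for the per-tile work: in a real renderer this casts rays for one
     * small block of contiguous pixels against its short list of candidate data blocks. */
    static void render_tile(int tile_id, unsigned char *framebuffer, int tile_pixels)
    {
        for (int p = 0; p < tile_pixels; p++)
            framebuffer[(long)tile_id * tile_pixels + p] = 0;   /* stub */
    }

    /* Dynamic scheduling hands tiles out at run time, so threads that finish cheap
     * (e.g. empty) tiles quickly pick up more work and the load stays balanced. */
    void render_frame(unsigned char *framebuffer, int n_tiles, int tile_pixels)
    {
        #pragma omp parallel for schedule(dynamic, 4)
        for (int t = 0; t < n_tiles; t++)
            render_tile(t, framebuffer, tile_pixels);
    }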
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, Hariswaran; Grout, Ray W
This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds compared to traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine-grain parallelism, and extensive use of vendor-supported libraries (Cilk Plus and Math Kernel Libraries). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a factor of ~ 3 speed-up using the Intel 2013 compiler and ~ 1.5 using the Intel 2017 compiler for large chemical mechanisms compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general purpose computational fluid dynamics codes.
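Two of the optimizations emphasized here, contiguous aligned memory layout and explicit vectorization, can be illustrated with the small fragment below; the 64-byte alignment, array names and reduction kernel are assumptions chosen for illustration, not code from the study.

    #include <stdlib.h>

    /* Species production rates kept in one contiguous, 64-byte aligned array
     * (structure-of-arrays layout) so the compiler can emit aligned vector loads. */
    double *alloc_rates(size_t n_cells, size_t n_species)
    {
        size_t bytes = n_cells * n_species * sizeof(double);
        bytes = (bytes + 63) & ~(size_t)63;        /* aligned_alloc needs a multiple of 64 */
        return aligned_alloc(64, bytes);
    }

    /* Vectorized accumulation of a source term over all cells. */
    double total_source(const double *rate, size_t n, double dt)
    {
        double sum = 0.0;
        #pragma omp simd reduction(+ : sum) aligned(rate : 64)
        for (size_t i = 0; i < n; i++)
            sum += rate[i] * dt;
        return sum;
    }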
JPRS Report, Science & Technology, China, High-Performance Computer Systems
1992-10-28
The microprocessor array in the AP85 system is composed of 16 completely identical array element microprocessors. Each array element ... microprocessors and capable of host machine reading and writing. The memory capacity of the array element microprocessors as a whole can be expanded ... transmission functions to carry out data transmission from array element microprocessor to array element microprocessor, from array element ...
Fabrication Security and Trust of Domain-Specific ASIC Processors
2016-10-30
... embedded in the design. For example, an ASIC processor potentially has a 10-1,000X performance advantage over its FPGA and GPP counterparts, but ... The paper concludes by summarizing our lessons learned from this project and suggests a few research directions. This work was sponsored by the Assistant Secretary of Defense for Research & Engineering under Air Force Contract #FA8721-05-C-0002.
Low latency messages on distributed memory multiprocessors
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Saltz, Joel
1993-01-01
Many of the issues in developing an efficient interface for communication on distributed memory machines are described and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that will better match current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. Information that is transmitted is either data or procedures to be executed on the remote processor. The dual processor system is better suited for efficiently handling asynchronous communications compared to a single processor system. The ability to send data or procedure is very flexible for minimizing message latency, based on the type of communication being performed. The test performed and the proposed interface are described.
NASA Astrophysics Data System (ADS)
Hayakawa, Hitoshi; Ogawa, Makoto; Shibata, Tadashi
2005-04-01
A very large scale integrated circuit (VLSI) architecture for a multiple-instruction-stream multiple-data-stream (MIMD) associative processor has been proposed. The processor employs an architecture that enables seamless switching from associative operations to arithmetic operations. The MIMD element is convertible to a regular central processing unit (CPU) while maintaining its high performance as an associative processor. Therefore, the MIMD associative processor can perform not only on-chip perception, i.e., searching for the vector most similar to an input vector throughout the on-chip cache memory, but also arithmetic and logic operations similar to those in ordinary CPUs, both simultaneously in parallel processing. Three key technologies have been developed to generate the MIMD element: associative-operation-and-arithmetic-operation switchable calculation units, a versatile register control scheme within the MIMD element for flexible operations, and a short instruction set for minimizing the memory size for program storage. Key circuit blocks were designed and fabricated using 0.18 μm complementary metal-oxide-semiconductor (CMOS) technology. As a result, the full-featured MIMD element is estimated to be 3 mm², showing the feasibility of an 8-parallel-MIMD-element associative processor in a single chip of 5 mm × 5 mm.
Shehzad, Danish; Bozkuş, Zeki
2016-01-01
The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets spread amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large share of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it adversely affects MPI_Allgather, which increases the communication time between processors. This necessitates an improved communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism achieves efficient communication between the processors in a fixed number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models.
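The move from two-sided MPI_Allgather to one-sided RMA can be sketched with the simplest fence-synchronized variant, in which each rank MPI_Puts its spike block into the matching slot of every peer's window; the recursive-doubling schedule described in the paper is omitted here, and the buffer names, per-interval window lifetime and data sizes are assumptions.

    #include <mpi.h>

    /* One spike-exchange interval: every rank writes its own block of `count`
     * integers into slot `myrank` of each peer's exposed result buffer.
     * (A production version would create the window once and reuse it.) */
    void rma_spike_exchange(const int *my_spikes, int count, int *all_spikes,
                            int nranks, int myrank, MPI_Comm comm)
    {
        MPI_Win win;
        MPI_Win_create(all_spikes, (MPI_Aint)nranks * count * sizeof(int),
                       sizeof(int), MPI_INFO_NULL, comm, &win);

        MPI_Win_fence(0, win);                      /* open the exposure epoch   */
        for (int r = 0; r < nranks; r++)
            MPI_Put(my_spikes, count, MPI_INT,      /* origin buffer             */
                    r, (MPI_Aint)myrank * count,    /* target rank, displacement */
                    count, MPI_INT, win);
        MPI_Win_fence(0, win);                      /* complete all puts         */

        MPI_Win_free(&win);
    }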
Onboard Interferometric SAR Processor for the Ka-Band Radar Interferometer (KaRIn)
NASA Technical Reports Server (NTRS)
Esteban-Fernandez, Daniel; Rodriquez, Ernesto; Peral, Eva; Clark, Duane I.; Wu, Xiaoqing
2011-01-01
An interferometric synthetic aperture radar (SAR) onboard processor concept and algorithm has been developed for the Ka-band radar interferometer (KaRIn) instrument on the Surface Water and Ocean Topography (SWOT) mission. This is a mission-critical subsystem that will perform interferometric SAR processing and multi-look averaging over the oceans to decrease the data rate by three orders of magnitude, and therefore enable the downlink of the radar data to the ground. The onboard processor performs demodulation, range compression, coregistration, and re-sampling, and forms nine azimuth squinted beams. For each of them, an interferogram is generated, including common-band spectral filtering to improve correlation, followed by averaging to the final 1 × 1-km ground resolution pixel. The onboard processor has been prototyped on a custom FPGA-based cPCI board, which will be part of the radar's digital subsystem. The level of complexity of this technology, dictated by the implementation of interferometric SAR processing at high resolution, the extremely tight level of accuracy required, and its implementation on FPGAs, was unprecedented at the time of this reporting for an onboard processor for flight applications.
Computational efficiency of parallel combinatorial OR-tree searches
NASA Technical Reports Server (NTRS)
Li, Guo-Jie; Wah, Benjamin W.
1990-01-01
The performance of parallel combinatorial OR-tree searches is analytically evaluated. This performance depends on the complexity of the problem to be solved, the error allowance function, the dominance relation, and the search strategies. The exact performance may be difficult to predict due to the nondeterminism and anomalies of parallelism. The authors derive the performance bounds of parallel OR-tree searches with respect to the best-first, depth-first, and breadth-first strategies, and verify these bounds by simulation. They show that a near-linear speedup can be achieved with respect to a large number of processors for parallel OR-tree searches. Using the bounds developed, the authors derive sufficient conditions for assuring that parallelism will not degrade performance and necessary conditions for allowing parallelism to have a speedup greater than the ratio of the numbers of processors. These bounds and conditions provide the theoretical foundation for determining the number of processors required to assure a near-linear speedup.
Real-time machine vision system using FPGA and soft-core processor
NASA Astrophysics Data System (ADS)
Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad
2012-06-01
This paper presents a machine vision system for real-time computation of distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel having a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through Fast Simplex Link (FSL). The latency for computing distance and angle of camera from the reference points was measured to be 2 ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.
Job-mix modeling and system analysis of an aerospace multiprocessor.
NASA Technical Reports Server (NTRS)
Mallach, E. G.
1972-01-01
An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
Hyperswitch Communication Network Computer
NASA Technical Reports Server (NTRS)
Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.
1993-01-01
Hyperswitch Communications Network (HCN) computer is prototype multiple-processor computer being developed. Incorporates improved version of hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905). Designed to support high-level software and expansion of itself. HCN computer is message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers, with respect to price/performance ratio, reliability, availability, and manufacturing. Design of HCN operating-system software provides flexible computing environment accommodating both parallel and distributed processing. Also achieves balance among following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.
Comparing the Performance of Two Dynamic Load Distribution Methods
NASA Technical Reports Server (NTRS)
Kale, L. V.
1987-01-01
Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to utilize the available processors effectively, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods through extensive simulation studies. The two schemes are the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, CWN is significantly more effective at distributing the work than the Gradient Model.
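The abstract does not specify the CWN rules themselves; the sketch below only conveys the generic flavor of neighborhood-based dynamic load balancing, in which an overloaded node pushes newly created work toward its least-loaded neighbor. The ring topology, threshold, and hop limit are assumptions for illustration, not the published scheme.

```c
#include <stdio.h>

#define NODES      8      /* processors in a ring topology (assumed)     */
#define THRESHOLD  4      /* load above which new work is pushed outward */

static int load[NODES];   /* outstanding work units per node             */

/* Place a newly created work unit, starting at 'origin'.  While the current
 * node is overloaded, hand the unit to the less loaded of its two ring
 * neighbors, for a bounded number of hops -- a crude neighborhood scheme. */
static int place_work(int origin)
{
    int node = origin;
    for (int hop = 0; hop < NODES && load[node] > THRESHOLD; hop++) {
        int left  = (node + NODES - 1) % NODES;
        int right = (node + 1) % NODES;
        node = (load[left] <= load[right]) ? left : right;
    }
    load[node]++;
    return node;
}

int main(void)
{
    /* Generate a burst of work on node 0 and watch it spread outward. */
    for (int i = 0; i < 32; i++)
        place_work(0);
    for (int n = 0; n < NODES; n++)
        printf("node %d: %d units\n", n, load[n]);
    return 0;
}
```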
Soft electron processor for surface sterilization of food material
NASA Astrophysics Data System (ADS)
Baba, Takashi; Kaneko, Hiromi; Taniguchi, Shuichi
2004-09-01
As frozen or chilled foods have become popular, it has become very important to provide food processing companies with raw materials having a lower level of microbial contamination. Consequently, the sterilization of food material is one of the major topics in food processing. Dried materials such as grains, beans, and spices are typically not deeply contaminated; the microorganisms reside on the surfaces of the materials, so low-energy electrons (below 300 keV) with small penetration power (Soft-Electrons) are very useful as a sterilization method for such materials. The Soft-Electron method was researched and named by Dr. Hayashi et al. It is a non-thermal method, so foods can be kept hygienic without serious deterioration. It is also a physical method, and so is free from chemical residues in foods. Recently, Nissin-High Voltage Co., Ltd. has developed and manufactured equipment for commercial use of Soft-Electrons (the Soft Electron Processor), which can process 500 kg/h of grains. This report introduces the Soft Electron Processor and shows the results of sterilization of wheat and brown rice by the equipment.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than those of electronic computers, as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and speed required for an equivalent throughput, as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms when coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
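The abstract does not give the partitioning details; the sketch below illustrates the general idea under assumed parameters. Each operand is split into base-16 digits, one low-precision digit-plane matrix-vector product is formed per digit pair (the part an analog optical stage would perform), and the partial results are recombined digitally with the appropriate powers of the base.

```c
#include <stdio.h>

#define N      3        /* matrix size                   */
#define BASE   16       /* digit radix (assumed)         */
#define DIGITS 2        /* digits per operand (assumed)  */

/* Split a non-negative integer into DIGITS base-BASE digits, LSD first. */
static void split(int x, int d[DIGITS])
{
    for (int i = 0; i < DIGITS; i++) { d[i] = x % BASE; x /= BASE; }
}

int main(void)
{
    int A[N][N] = { {12, 200, 7}, {33, 90, 150}, {5, 64, 255} };
    int x[N]    = { 100, 3, 77 };
    long y[N]   = { 0 };

    int Ad[N][N][DIGITS], xd[N][DIGITS];
    for (int i = 0; i < N; i++) {
        split(x[i], xd[i]);
        for (int j = 0; j < N; j++) split(A[i][j], Ad[i][j]);
    }

    /* One low-precision inner product per digit-plane pair (p, q); in an
     * optical processor each of these would be an analog pass.  The digital
     * post-processing weights each partial sum by BASE^(p+q) and adds it. */
    for (int p = 0; p < DIGITS; p++) {
        for (int q = 0; q < DIGITS; q++) {
            long weight = 1;
            for (int k = 0; k < p + q; k++) weight *= BASE;
            for (int i = 0; i < N; i++) {
                long partial = 0;
                for (int j = 0; j < N; j++)
                    partial += (long)Ad[i][j][p] * xd[j][q];
                y[i] += weight * partial;
            }
        }
    }

    for (int i = 0; i < N; i++)
        printf("y[%d] = %ld\n", i, y[i]);   /* matches full-precision A*x */
    return 0;
}
```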
Integration, Development and Performance of the 500 TFLOPS Heterogeneous Cluster (Condor)
2012-08-01
The Condor cluster incorporates PlayStation 3 nodes for streaming processing; the PlayStation 3 uses the IBM Cell BE processor, which adopts a multi-processor, single-instruction-multiple-data (SIMD) design.
Development of emission factors for polycarbonate processing.
Rhodes, Verne L; Kriek, George; Lazear, Nelson; Kasakevich, Jean; Martinko, Marie; Heggs, R P; Holdren, M W; Wisbith, A S; Keigley, G W; Williams, J D; Chuang, J C; Satola, J R
2002-07-01
Emission factors for selected volatile organic compounds (VOCs) and particulate emissions were developed while processing eight commercial grades of polycarbonate (PC) and one grade of a PC/acrylonitrile-butadiene-styrene (ABS) blend. A small commercial-type extruder was used, and the extrusion temperature was held constant at 304 degrees C. An emission factor was calculated for each substance measured and is reported as pounds released to the atmosphere/million pounds of polymer resin processed [ppm (wt/wt)]. Scaled to production volumes, these emission factors can be used by processors to estimate emission quantities from similar PC processing operations.
Improved Remapping Processor For Digital Imagery
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1991-01-01
The proposed digital image processor is an improved version of the Programmable Remapper, which performs geometric and radiometric transformations on digital images. Features include overlapping and variably sized preimages. It overcomes some of the limitations of image-warping circuit boards that implement only those geometric transformations expressible in terms of polynomials of limited order. It also overcomes limitations of the existing Programmable Remapper and has been made to perform transformations at video rate.
Cognitive Tools and Thinking Performance: The Case of Word Processors and Writing.
ERIC Educational Resources Information Center
Kellogg, Ronald T.; Mueller, Suzanne
A study examined whether word processing amplifies writing performance and whether it restructures the process of writing. Sixteen college students wrote a short essay in a single session either on a word processor or in longhand. The quality of the essays was assessed by trained judges who rated their content and style. Text analysis methods were…
Parallel processing in a host plus multiple array processor system for radar
NASA Technical Reports Server (NTRS)
Barkan, B. Z.
1983-01-01
Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.
Emergency product generation for disaster management using RISAT and DMSAR quick look SAR processors
NASA Astrophysics Data System (ADS)
Desai, Nilesh; Sharma, Ritesh; Kumar, Saravana; Misra, Tapan; Gujraty, Virendra; Rana, SurinderSingh
2006-12-01
Over the last few years, ISRO has embarked upon the development of two complex Synthetic Aperture Radar (SAR) missions, viz. the Spaceborne Radar Imaging Satellite (RISAT) and the Airborne SAR for Disaster Management (DMSAR), as a capacity-building measure under the country's Disaster Management Support (DMS) Program, for estimating the extent of damage over large areas (~75 km) and assessing the effectiveness of the relief measures undertaken during natural disasters such as cyclones, epidemics, earthquakes, floods, landslides, forest fires and crop diseases. Synthetic Aperture Radar has a unique role to play in mapping and monitoring large areas affected by natural disasters, especially floods, owing to its ability to see through clouds and to image in all weather. The generation of SAR images with a quick turnaround time is essential to meet the above DMS objectives, so the development of SAR processors for these two SAR systems poses considerable challenges and design effort. Considering the growing user demand and the necessity for a full-fledged, high-throughput processor to process SAR data and generate images in real or near-real time, the design and development of a generic SAR processor has been taken up; it will meet the SAR processing requirements of both airborne and spaceborne SAR systems. This hardware SAR processor is being built, to the extent possible, using only Commercial-Off-The-Shelf (COTS) DSP and other hardware plug-in modules on a Compact PCI (cPCI) platform. Thus, the major thrust has been on working out a multi-processor Digital Signal Processor (DSP) architecture and on algorithm development and optimization rather than on hardware design and fabrication. For DMSAR, this generic SAR processor operates as a Quick Look SAR Processor (QLP) on board the aircraft to produce real-time full-swath DMSAR images, and as a ground-based Near-Real-Time high-precision full-swath Processor (NRTP). It will generate full-swath (6 to 75 km) DMSAR images in the 1 m / 3 m / 5 m / 10 m / 30 m resolution SAR operating modes. For the RISAT mission, this generic Quick Look SAR Processor will be mainly used for browse product generation at the NRSA-Shadnagar (SAN) ground receive station. The RISAT QLP/NRTP is also proposed to provide an alternative emergency SAR product generation chain. For this, the S/C aux data appended in the Onboard SAR Frame Format (x, y, z, x', y', z', roll, pitch, yaw) and the predicted orbit from the previous day's orbit determination data will be used. The QLP/NRTP will produce ground-range images in real or near-real time. For emergency data product generation, additional off-line tasks like geo-tagging, masking and quality control need to be performed on the processed image. The QLP/NRTP will generate geo-tagged images from the annotation data available in the SAR P/L data itself. Since the orbit and attitude information are used as-is, the location accuracy will be poorer compared with the product generated using ADIF, where smoothed attitude and orbit are made available. Additional tasks like masking, output formatting and quality checking of the data product will be carried out at Balanagar, NRSA, after the annotated image data from the QLP/NRTP is sent to Balanagar. The necessary interfaces to the QLP/NRTP for emergency product generation are also being worked out. As is widely acknowledged, the QLP/NRTP for RISAT and DMSAR is an ambitious effort and represents the technology of the future.
It is expected that by the middle of the next decade, next-generation SAR missions worldwide will carry onboard SAR processors of varying capabilities and will generate SAR data products and information products onboard instead of downlinking raw SAR data. It is therefore envisaged that these activities related to QLP/NRTP implementation for the RISAT ground segment and DMSAR will be a significant step that feeds directly into the development of onboard real-time processing systems for ISRO's future spaceborne SAR missions. This paper describes the design requirements, configuration details and salient features of these Quick Look SAR processors for RISAT and DMSAR, and highlights their utility for the generation of emergency products for disaster management.
Wolfe, Jace; Schafer, Erin; Parkinson, Aaron; John, Andrew; Hudson, Mary; Wheeler, Julie; Mucci, Angie
2013-01-01
The objective of this study was to compare speech recognition in quiet and in noise for cochlear implant recipients using two different types of personal frequency modulation (FM) systems (directly coupled [direct auditory input] versus induction neckloop) with each of two sound processors (Cochlear Nucleus Freedom versus Cochlear Nucleus 5). Two different experiments were conducted within this study. In both these experiments, mixing of the FM signal within the Freedom processor was implemented via the same scheme used clinically for the Freedom sound processor. In Experiment 1, the aforementioned comparisons were conducted with the Nucleus 5 programmed so that the microphone and FM signals were mixed and then the mixed signals were subjected to autosensitivity control (ASC). In Experiment 2, comparisons between the two FM systems and processors were conducted again with the Nucleus 5 programmed to provide a more complex multistage implementation of ASC during the preprocessing stage. This study was a within-subject, repeated-measures design. Subjects were recruited from the patient population at the Hearts for Hearing Foundation in Oklahoma City, OK. Fifteen subjects participated in Experiment 1, and 16 subjects participated in Experiment 2. Subjects were adults who had used either unilateral or bilateral cochlear implants for at least 1 year. In this experiment, no differences were found in speech recognition in quiet obtained with the two different FM systems or the various sound-processor conditions. With each sound processor, speech recognition in noise was better with the directly coupled direct auditory input system relative to the neckloop system. The multistage ASC processing of the Nucleus 5 sound processor provided better performance than the single-stage approach for the Nucleus 5 and the Nucleus Freedom sound processor. Speech recognition in noise is substantially affected by the type of sound processor, FM system, and implementation of ASC used by a Cochlear implant recipient.
A digital retina-like low-level vision processor.
Mertoguno, S; Bourbakis, N G
2003-01-01
This correspondence presents the basic design and the simulation of a low level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used on visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k × m hexagonal identical autonomous cells that simultaneously execute certain low level vision tasks. Thus, the hardware design and the simulation at the transistor level of the processing elements (PEs) of the retina-like processor and its simulated functionality with illustrative examples are provided in this paper.
Support for Diagnosis of Custom Computer Hardware
NASA Technical Reports Server (NTRS)
Molock, Dwaine S.
2008-01-01
The Coldfire SDN Diagnostics software is a flexible means of exercising, testing, and debugging custom computer hardware. The software is a set of routines that, collectively, serve as a common software interface through which one can gain access to various parts of the hardware under test and/or cause the hardware to perform various functions. The routines can be used to construct tests to exercise, and verify the operation of, various processors and hardware interfaces. More specifically, the software can be used to gain access to memory, to execute timer delays, to configure interrupts, and configure processor cache, floating-point, and direct-memory-access units. The software is designed to be used on diverse NASA projects, and can be customized for use with different processors and interfaces. The routines are supported, regardless of the architecture of a processor that one seeks to diagnose. The present version of the software is configured for Coldfire processors on the Subsystem Data Node processor boards of the Solar Dynamics Observatory. There is also support for the software with respect to Mongoose V, RAD750, and PPC405 processors or their equivalents.
Solving the Cauchy-Riemann equations on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
Discussed is the implementation of a single algorithm on three parallel-vector computers. The algorithm is a relaxation scheme for the solution of the Cauchy-Riemann equations, a set of coupled first-order partial differential equations. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit-serial processors; the FLEX/32, an MIMD machine with 20 processors; and the CRAY/2, an MIMD machine with four vector processors. The machine architectures are briefly described. The implementation of the algorithm is discussed in relation to these architectures, and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Conclusions are presented.
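The abstract does not reproduce the paper's relaxation scheme; as a hedged illustration, the sketch below exploits the fact that solutions of the Cauchy-Riemann equations u_x = v_y, u_y = -v_x are harmonic, and applies a plain Jacobi relaxation to u and v on a small grid with fixed boundary values taken from f(z) = z^2. Grid size, boundary data, and iteration count are arbitrary; this is not the algorithm of the paper, only a relaxation of the same flavor whose interior sweeps parallelize naturally on SIMD or vector hardware.

```c
#include <stdio.h>

#define N      32       /* grid points per side (assumed) */
#define ITERS  2000     /* Jacobi sweeps (assumed)        */

static double u[N][N], v[N][N], un[N][N], vn[N][N];

int main(void)
{
    /* Boundary conditions from an exact Cauchy-Riemann pair:
     * f(z) = z^2  =>  u = x^2 - y^2,  v = 2xy. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double x = (double)j / (N - 1), y = (double)i / (N - 1);
            if (i == 0 || j == 0 || i == N - 1 || j == N - 1) {
                u[i][j] = x * x - y * y;
                v[i][j] = 2.0 * x * y;
            }
        }

    /* Jacobi relaxation: every interior point is replaced by the average of
     * its four neighbours; all interior points of a sweep are independent
     * and can be updated concurrently on a parallel machine. */
    for (int it = 0; it < ITERS; it++) {
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++) {
                un[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]);
                vn[i][j] = 0.25 * (v[i-1][j] + v[i+1][j] + v[i][j-1] + v[i][j+1]);
            }
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++) {
                u[i][j] = un[i][j];
                v[i][j] = vn[i][j];
            }
    }

    /* At the grid centre x = y, so the exact u is 0 and v is 2*x*y. */
    double xc = (double)(N / 2) / (N - 1);
    printf("u=% .5f (exact % .5f)   v=% .5f (exact % .5f)\n",
           u[N/2][N/2], 0.0, v[N/2][N/2], 2.0 * xc * xc);
    return 0;
}
```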
Landsat image registration for agricultural applications
NASA Technical Reports Server (NTRS)
Wolfe, R. H., Jr.; Juday, R. D.; Wacker, A. G.; Kaneko, T.
1982-01-01
An image registration system has been developed at the NASA Johnson Space Center (JSC) to spatially align multi-temporal Landsat acquisitions for use in agriculture and forestry research. Working in conjunction with the Master Data Processor (MDP) at the Goddard Space Flight Center, it functionally replaces the long-standing LACIE Registration Processor as JSC's data supplier. The system represents an expansion of the techniques developed for the MDP and LACIE Registration Processor, and it utilizes the experience gained in an IBM/JSC effort evaluating the performance of the latter. These techniques are discussed in detail. Several tests were developed to evaluate the registration performance of the system. The results indicate that 1/15-pixel accuracy (about 4m for Landsat MSS) is achievable in ideal circumstances, sub-pixel accuracy (often to 0.2 pixel or better) was attained on a representative set of U.S. acquisitions, and a success rate commensurate with the LACIE Registration Processor was realized. The system has been employed in a production mode on U.S. and foreign data, and a performance similar to the earlier tests has been noted.
Real time processor for array speckle interferometry
NASA Astrophysics Data System (ADS)
Chin, Gordon; Florez, Jose; Borelli, Renan; Fong, Wai; Miko, Joseph; Trujillo, Carlos
1989-02-01
The authors are constructing a real-time processor to acquire image frames, perform array flat-fielding, execute a 64 x 64 element two-dimensional complex FFT (fast Fourier transform) and average the power spectrum, all within the 25 ms coherence time for speckles at near-IR (infrared) wavelength. The processor will be a compact unit controlled by a PC with real-time display and data storage capability. This will provide the ability to optimize observations and obtain results on the telescope rather than waiting several weeks before the data can be analyzed and viewed with offline methods. The image acquisition and processing, design criteria, and processor architecture are described.
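As a purely illustrative, hedged sketch of the per-frame pipeline the abstract describes (flat-fielding followed by accumulation of the image power spectrum), the code below uses a naive 2-D DFT on a tiny frame; the real-time processor would use a 64 x 64 hardware FFT, and the frame size, gain map, and data here are made-up.

```c
#include <math.h>
#include <stdio.h>

#define N 8                       /* tiny frame for illustration; real system: 64 */

static double frame[N][N];        /* raw detector frame                    */
static double flat[N][N];         /* flat-field gain map                   */
static double pspec[N][N];        /* running average of power spectra      */

/* Flat-field one frame and add its power spectrum |DFT|^2 to the average. */
static void accumulate(int nframes)
{
    double corr[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            corr[i][j] = frame[i][j] / flat[i][j];

    for (int k = 0; k < N; k++)
        for (int l = 0; l < N; l++) {
            double re = 0.0, im = 0.0;
            for (int m = 0; m < N; m++)
                for (int n = 0; n < N; n++) {
                    double ph = -2.0 * 3.14159265358979 *
                                ((double)k * m / N + (double)l * n / N);
                    re += corr[m][n] * cos(ph);
                    im += corr[m][n] * sin(ph);
                }
            pspec[k][l] += (re * re + im * im) / nframes;
        }
}

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            flat[i][j]  = 1.0 + 0.05 * ((i + j) % 3);   /* fake gain map  */
            frame[i][j] = (i == j) ? 2.0 : 1.0;         /* fake speckle   */
        }
    accumulate(100);                                    /* one of 100 frames */
    printf("P(0,0) contribution = %f\n", pspec[0][0]);
    return 0;
}
```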
The emerging conceptualization of groups as information processors.
Hinsz, V B; Tindale, R S; Vollrath, D A
1997-01-01
A selective review of research highlights the emerging view of groups as information processors. In this review, the authors include research on processing objectives, attention, encoding, storage, retrieval, processing, response, feedback, and learning in small interacting task groups. The groups as information processors perspective underscores several characteristic dimensions of variability in group performance of cognitive tasks, namely, commonality-uniqueness of information, convergence-diversity of ideas, accentuation-attenuation of cognitive processes, and belongingness-distinctiveness of members. A combination of contributions framework provides an additional conceptualization of information processing in groups. The authors also address implications, caveats, and questions for future research and theory regarding groups as information processors.
iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones
NASA Astrophysics Data System (ADS)
Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il
2013-02-01
The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand-gesture-based interface in which the user holds a mobile phone in one hand and views the other hand's palm through the built-in camera. The virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction with hand and finger movements is achieved through hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual-core processor and 1 GB RAM.
Safeguards Technology Factsheet - Unattended Dual Current Monitor (UDCM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newell, Matthew R.
2016-04-13
The UDCM is a low-current measurement device designed to record sub-nano-amp to micro-amp currents from radiation detectors. The UDCM is a two-channel device that incorporates a Commercial-Off-The-Shelf (COTS) processor enabling both serial over USB as well as Ethernet communications. The instrument includes microSD and USB flash memory for data storage as well as a programmable High Voltage (HV) power supply for detector bias. The UDCM is packaged in the same enclosure, employs the same processor and has a similar user interface as the UMSR. A serial over USB communication line to the UDCM allows the use of existing versions of MIC software, while the Ethernet port is compatible with the new IAEA RAINSTORM communication protocol.
High-performance ultra-low power VLSI analog processor for data compression
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1996-01-01
An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.
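A software analogue of the search the patent describes (comparing an input vector against N codebook vectors column by column and selecting the closest) is sketched below. In the patent this comparison is performed by an analog cell array in parallel; the dimensions and the distance metric here (sum of squared differences) are assumptions made only for illustration.

```c
#include <float.h>
#include <stdio.h>

#define M 4      /* components per vector (rows of the cell array)    */
#define N 3      /* codebook vectors      (columns of the cell array) */

/* Return the index of the codebook vector closest to the input vector. */
static int closest_codeword(const double in[M], const double cb[N][M])
{
    int best = 0;
    double best_d = DBL_MAX;
    for (int col = 0; col < N; col++) {          /* each column in parallel in hardware */
        double d = 0.0;
        for (int row = 0; row < M; row++) {      /* per-cell component comparisons      */
            double diff = in[row] - cb[col][row];
            d += diff * diff;                    /* "closeness" signal                  */
        }
        if (d < best_d) { best_d = d; best = col; }
    }
    return best;                                 /* codebook vector index output        */
}

int main(void)
{
    double codebook[N][M] = {
        {0.0, 0.0, 0.0, 0.0},
        {1.0, 1.0, 1.0, 1.0},
        {0.5, 0.2, 0.8, 0.1},
    };
    double input[M] = {0.9, 1.1, 0.95, 1.05};
    printf("closest codebook index: %d\n", closest_codeword(input, codebook));
    return 0;
}
```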
Offset Printing Plate Quality Sensor on a Low-Cost Processor
Poljak, Jelena; Botella, Guillermo; García, Carlos; Poljaček, Sanja Mahović; Prieto-Matías, Manuel; Tirado, Francisco
2013-01-01
The aim of this work is to develop a microprocessor-based sensor that measures the quality of the offset printing plate through the introduction of different image analysis applications. The main features of the presented system are its low cost, low power consumption, modularity, easy integration with other industrial modules for printing plates, and robustness in noisy environments. For the sake of clarity, a viability analysis of previous software is presented through different strategies, based on a dynamic histogram and the Hough transform. This paper provides performance and scalability data compared with existing costly commercial devices. Furthermore, a general overview of quality control possibilities for printing plates is presented and could be useful to a system where such controls are regularly conducted. PMID:24284766
Software/hardware distributed processing network supporting the Ada environment
NASA Astrophysics Data System (ADS)
Wood, Richard J.; Pryk, Zen
1993-09-01
A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC processor, VHSIC ASICs for high-speed, reliable inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM, and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit as a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.
Spacesuit Data Display and Management System
NASA Technical Reports Server (NTRS)
Hall, David G.; Sells, Aaron; Shah, Hemal
2009-01-01
A prototype embedded avionics system has been designed for the next generation of NASA extra-vehicular-activity (EVA) spacesuits. The system performs biomedical and other sensor monitoring, image capture, data display, and data transmission. An existing NASA Phase I and II award winning design for an embedded computing system (ZIN vMetrics - BioWATCH) has been modified. The unit has a reliable, compact form factor with flexible packaging options. These innovations are significant, because current state-of-the-art EVA spacesuits do not provide capability for data displays or embedded data acquisition and management. The Phase 1 effort achieved Technology Readiness Level 4 (high fidelity breadboard demonstration). The breadboard uses a commercial-grade field-programmable gate array (FPGA) with embedded processor core that can be upgraded to a space-rated device for future revisions.
3D integrated superconducting qubits
NASA Astrophysics Data System (ADS)
Rosenberg, D.; Kim, D.; Das, R.; Yost, D.; Gustavsson, S.; Hover, D.; Krantz, P.; Melville, A.; Racz, L.; Samach, G. O.; Weber, S. J.; Yan, F.; Yoder, J. L.; Kerman, A. J.; Oliver, W. D.
2017-10-01
As the field of quantum computing advances from the few-qubit stage to larger-scale processors, qubit addressability and extensibility will necessitate the use of 3D integration and packaging. While 3D integration is well-developed for commercial electronics, relatively little work has been performed to determine its compatibility with high-coherence solid-state qubits. Of particular concern, qubit coherence times can be suppressed by the requisite processing steps and close proximity of another chip. In this work, we use a flip-chip process to bond a chip with superconducting flux qubits to another chip containing structures for qubit readout and control. We demonstrate that high qubit coherence (T1, T2,echo > 20 μs) is maintained in a flip-chip geometry in the presence of galvanic, capacitive, and inductive coupling between the chips.
Dense and Sparse Matrix Operations on the Cell Processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel W.; Shalf, John; Oliker, Leonid
2005-05-01
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.
Feasibility of optically interconnected parallel processors using wavelength division multiplexing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deri, R.J.; De Groot, A.J.; Haigh, R.E.
1996-03-01
New national security demands require enhanced computing systems for nearly ab initio simulations of extremely complex systems and analyzing unprecedented quantities of remote sensing data. This computational performance is being sought using parallel processing systems, in which many less powerful processors are ganged together to achieve high aggregate performance. Such systems require increased capability to communicate information between individual processor and memory elements. As it is likely that the limited performance of today's electronic interconnects will prevent the system from achieving its ultimate performance, there is great interest in using fiber optic technology to improve interconnect communication. However, little information is available to quantify the requirements on fiber optical hardware technology for this application. Furthermore, we have sought to explore interconnect architectures that use the complete communication richness of the optical domain rather than using optics as a simple replacement for electronic interconnects. These considerations have led us to study the performance of a moderate size parallel processor with optical interconnects using multiple optical wavelengths. We quantify the bandwidth, latency, and concurrency requirements which allow a bus-type interconnect to achieve scalable computing performance using up to 256 nodes, each operating at GFLOP performance. Our key conclusion is that scalable performance, to ~150 GFLOPS, is achievable for several scientific codes using an optical bus with a small number of WDM channels (8 to 32), only one WDM channel received per node, and achievable optoelectronic bandwidth and latency requirements. 21 refs., 10 figs.
NASA Astrophysics Data System (ADS)
Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide
2015-09-01
The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
Scan line graphics generation on the massively parallel processor
NASA Technical Reports Server (NTRS)
Dorband, John E.
1988-01-01
Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. To perform pixel value calculations, facilitate load balancing across the processors and apply the results to the Z buffer efficiently in parallel requires special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.
Glaser, I
1982-04-01
By combining a lenslet array with masks it is possible to obtain a noncoherent optical processor capable of computing in parallel generalized 2-D discrete linear transformations. We present here an analysis of such lenslet array processors (LAP). The effect of several errors, including optical aberrations, diffraction, vignetting, and geometrical and mask errors, are calculated, and guidelines to optical design of LAP are derived. Using these results, both ultimate and practical performances of LAP are compared with those of competing techniques.
Cedar-a large scale multiprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gajski, D.; Kuck, D.; Lawrie, D.
1983-01-01
This paper presents an overview of Cedar, a large scale multiprocessor being designed at the University of Illinois. This machine is designed to accommodate several thousand high performance processors which are capable of working together on a single job, or they can be partitioned into groups of processors where each group of one or more processors can work on separate jobs. Various aspects of the machine are described including the control methodology, communication network, optimizing compiler and plans for construction. 13 references.
Geospace simulations on the Cell BE processor
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D.
2008-12-01
OpenGGCM (Open Geospace General Circulation Model) is an established numerical code that simulates the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is limited by computational constraints on grid resolution. We investigate porting of the MHD solver to the Cell BE architecture, a novel inhomogeneous multicore architecture capable of up to 230 GFlops per processor. Realizing this high performance on the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallel approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the vector/SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We obtained excellent performance numbers, a speed-up of a factor of 25 compared to just using the main processor, while still keeping the numerical implementation details of the code maintainable.
Smartphones as image processing systems for prosthetic vision.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J
2013-01-01
The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capability and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old devices to recent ones. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
Broadcast-quality-stereoscopic video in a time-critical entertainment and corporate environment
NASA Astrophysics Data System (ADS)
Gay, Jean-Philippe
1995-03-01
'reality present: Peter Gabriel and Cirque du Soleil' is a 12 minute original work directed and produced by Doug Brown, Jean-Philippe Gay and A. Coogan, which showcases creative content applications of commercial stereoscopic video equipment. For production, a complete equipment package including a Steadicam mount was used in support of the Ikegami LK-33 camera. Remote production units were fielded in the time-critical, on-stage and off-stage environments of two major live concerts: Peter Gabriel's Secret World performance at the San Diego Sports Arena, and Cirque du Soleil's Saltimbanco performance in Chicago. Twin 60 Hz video channels were captured on Beta SP for maximum post-production flexibility. Digital post-production and field-sequential mastering were effected in D-2 format at studio facilities in Los Angeles. The program had its world premiere before a large public at the World of Music, Arts and Dance festivals in Los Angeles and San Francisco in late 1993. It was presented to the artists in Los Angeles, Montreal and Washington D.C. Additional presentations have been made using a broad range of commercial and experimental stereoscopic video equipment, including projection systems, LCD and passive eyewear, and digital signal processors. Technical packages for live presentation have been fielded on site and off, through to the present.
Potential of minicomputer/array-processor system for nonlinear finite-element analysis
NASA Technical Reports Server (NTRS)
Strohkorb, G. A.; Noor, A. K.
1983-01-01
The potential of using a minicomputer/array-processor system for the efficient solution of large-scale, nonlinear, finite-element problems is studied. A Prime 750 is used as the host computer, and a software simulator residing on the Prime is employed to assess the performance of the Floating Point Systems AP-120B array processor. Major hardware characteristics of the system such as virtual memory and parallel and pipeline processing are reviewed, and the interplay between various hardware components is examined. Effective use of the minicomputer/array-processor system for nonlinear analysis requires the following: (1) proper selection of the computational procedure and the capability to vectorize the numerical algorithms; (2) reduction of input-output operations; and (3) overlapping host and array-processor operations. A detailed discussion is given of techniques to accomplish each of these tasks. Two benchmark problems with 1715 and 3230 degrees of freedom, respectively, are selected to measure the anticipated gain in speed obtained by using the proposed algorithms on the array processor.
Vascular system modeling in parallel environment - distributed and shared memory approaches
Jurczuk, Krzysztof; Kretowski, Marek; Bezy-Wendling, Johanne
2011-01-01
The paper presents two approaches in parallel modeling of vascular system development in internal organs. In the first approach, new parts of tissue are distributed among processors and each processor is responsible for perfusing its assigned parts of tissue to all vascular trees. Communication between processors is accomplished by passing messages and therefore this algorithm is perfectly suited for distributed memory architectures. The second approach is designed for shared memory machines. It parallelizes the perfusion process during which individual processing units perform calculations concerning different vascular trees. The experimental results, performed on a computing cluster and multi-core machines, show that both algorithms provide a significant speedup. PMID:21550891
Parallel matrix multiplication on the Connection Machine
NASA Technical Reports Server (NTRS)
Tichy, Walter F.
1988-01-01
Matrix multiplication is a computation and communication intensive problem. Six parallel algorithms for matrix multiplication on the Connection Machine are presented and compared with respect to their performance and processor usage. For n by n matrices, the algorithms have theoretical running times of O(n² log n), O(n log n), O(n), and O(log n), and require n, n², n², and n³ processors, respectively. With careful attention to communication patterns, the theoretically predicted runtimes can indeed be achieved in practice. The parallel algorithms illustrate the tradeoffs between performance, communication cost, and processor usage.
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared-memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
Entropy Generation/Availability Energy Loss Analysis Inside MIT Gas Spring and "Two Space" Test Rigs
NASA Technical Reports Server (NTRS)
Ebiana, Asuquo B.; Savadekar, Rupesh T.; Patel, Kaushal V.
2006-01-01
The results of an entropy generation and availability energy loss analysis under conditions of oscillating pressure and oscillating helium gas flow in two Massachusetts Institute of Technology (MIT) test rigs, a piston-cylinder rig and a piston-cylinder-heat-exchanger rig, are presented. Two solution domains are of interest: the gas spring (single-space) in the piston-cylinder test rig and the gas spring plus heat exchanger (two-space) in the piston-cylinder-heat-exchanger test rig. The Sage and CFD-ACE+ commercial numerical codes are used to obtain 1-D and 2-D computer models, respectively, of each of the two solution domains and to simulate the oscillating gas flow and heat transfer effects in these domains. Second-law analysis is used to characterize the entropy generation and availability energy losses inside the two solution domains. Internal and external entropy generation and availability energy loss results predicted by Sage and CFD-ACE+ are compared. Thermodynamic loss analysis of simple systems such as the MIT test rigs is often useful for understanding some important features of complex pattern-forming processes in more complex systems like the Stirling engine. This study is aimed at improving numerical codes for the prediction of thermodynamic losses via the development of a loss post-processor. The incorporation of loss post-processors in Stirling engine numerical codes will facilitate Stirling engine performance optimization. Loss analysis using entropy-generation rates due to heat and fluid flow is a relatively new technique for assessing component performance. It offers a deep insight into the flow phenomena, allows a more exact calculation of losses than is possible with traditional means involving the application of loss correlations, and provides an effective tool for improving component and overall system performance.
Low latency memory access and synchronization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.
A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by a store, such that the processor only performs a read operation and the hardware locking device performs the subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory; the pointers are used to determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.
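As a hedged software analogue of the pointer-guided prefetch the patent describes (each memory line carrying a pointer to the next line to fetch), the sketch below walks a linked structure and issues a prefetch for the node named by the embedded pointer before processing the current one. __builtin_prefetch is a GCC/Clang builtin; the node layout is an assumption for illustration, not the patented hardware line format.

```c
#include <stdlib.h>
#include <stdio.h>

/* A "memory line" that, besides its payload, carries a pointer telling us
 * which line will be needed next. */
struct line {
    struct line *next;      /* embedded prefetch pointer       */
    long payload[6];        /* rest of the line's data         */
};

static long walk(struct line *head)
{
    long sum = 0;
    for (struct line *p = head; p != NULL; p = p->next) {
        if (p->next)
            __builtin_prefetch(p->next, 0, 1);   /* fetch the pointed-to line early */
        for (int i = 0; i < 6; i++)
            sum += p->payload[i];                /* work on the current line        */
    }
    return sum;
}

int main(void)
{
    /* Build a short, deliberately non-contiguous chain of lines. */
    struct line *head = NULL;
    for (int i = 0; i < 1000; i++) {
        struct line *n = malloc(sizeof *n);
        n->next = head;
        for (int j = 0; j < 6; j++) n->payload[j] = i + j;
        head = n;
    }
    printf("sum = %ld\n", walk(head));
    while (head) { struct line *t = head->next; free(head); head = t; }
    return 0;
}
```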
NASA Astrophysics Data System (ADS)
Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa
2017-08-01
The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5 % more efficient on power consumption compared with the CPU platform. The optimisations also enabled much greater parallel scalability on both the CPU cluster and the KNL cluster, scaled to 40 CPU nodes and 30 KNL nodes, with a parallel efficiency of 70.4 and 42.2 %, respectively.
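Optimisation (1) above is the standard hybrid MPI + OpenMP pattern; the fragment below is a generic, hedged sketch of that pattern (domain-decomposed MPI ranks, each threading a per-cell loop with OpenMP, with a diagnostic gathered through MPI rather than interface files). It is not code from GNAQPMS, and the array size and update are placeholders.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define NCELL 100000   /* grid cells per MPI rank (assumed) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    static double conc[NCELL];                 /* tracer concentration on this rank's subdomain */
    for (int i = 0; i < NCELL; i++) conc[i] = 1.0;

    /* Chemistry-like update: independent per cell, so threads share the loop. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < NCELL; i++)
        conc[i] *= 0.99;                       /* stand-in for a gas-phase chemistry step */

    /* Global diagnostic gathered with MPI rather than interface files. */
    double local = 0.0, total = 0.0;
    for (int i = 0; i < NCELL; i++) local += conc[i];
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global mass proxy = %f (ranks=%d, threads=%d)\n",
               total, nranks, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```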
QERx- A Faster than Real-Time Emulator for Space Processors
NASA Astrophysics Data System (ADS)
Carvalho, B.; Pidgeon, A.; Robinson, P.
2012-08-01
Developing software for space systems is challenging, especially because, in order to be sure it can cope with the harshness of the environment and the imperative requirements and constraints imposed by the platform where it will run, it needs to be tested exhaustively. Software Validation Facilities (SVFs) are well known to industry and developers, and provide the means to run the On-Board Software (OBSW) in a realistic environment, allowing the development team to debug and test the software. The challenge is to keep up with the performance of the new processors (LEON2 and LEON3), which need to be emulated within the SVF. Such processor emulators are also used in operational simulators, which support mission preparation and train mission operators. These simulators mimic the satellite and its behaviour as realistically as possible. For test and operational efficiency reasons, and because they will need to interact with external systems, both these use cases require the processor emulators to provide real-time, or faster, performance. It is known to the industry that the performance of previously available emulators is not enough to cope with the performance of the new processors available in the market. SciSys approached this problem with dynamic translation technology, trying to keep costs down by avoiding a hardware solution and keeping the integration flexibility of full software emulation. SciSys presented “QERx: A High Performance Emulator for Software Validation and Simulations” [1] at a previous DASIA event. Since then the idea has evolved and QERx has been successfully validated. SciSys is now presenting QERx as a product that can be tailored to fit different emulation needs. This paper presents QERx's latest developments and current status.
Benefit of the UltraZoom beamforming technology in noise in cochlear implant users.
Mosnier, Isabelle; Mathias, Nathalie; Flament, Jonathan; Amar, Dorith; Liagre-Callies, Amelie; Borel, Stephanie; Ambert-Dahan, Emmanuèle; Sterkers, Olivier; Bernardeschi, Daniele
2017-09-01
The objectives of the study were to demonstrate the audiological and subjective benefits of the adaptive UltraZoom beamforming technology available in the Naída CI Q70 sound processor, in cochlear-implanted adults upgraded from a previous generation sound processor. Thirty-four adults aged between 21 and 89 years (mean 53 ± 19) were prospectively included. Nine subjects were unilaterally implanted, 11 bilaterally and 14 were bimodal users. The mean duration of cochlear implant use was 7 years (range 5-15 years). Subjects were tested in quiet with monosyllabic words and in noise with the adaptive French Matrix test in the best-aided conditions. The test setup contained a signal source in front of the subject and three noise sources at +/-90° and 180°. The noise was presented at a fixed level of 65 dB SPL and the level of speech signal was varied to obtain the speech reception threshold (SRT). During the upgrade visit, subjects were tested with the Harmony and with the Naída CI sound processors in omnidirectional microphone configuration. After a take-home phase of 2 months, tests were repeated with the Naída CI processor with and without UltraZoom. Subjective assessment of the sound quality in daily environments was recorded using the APHAB questionnaire. No difference in performance was observed in quiet between the two processors. The Matrix test in noise was possible in the 21 subjects with the better performance. No difference was observed between the two processors for performance in noise when using the omnidirectional microphone. At the follow-up session, the median SRT with the Naída CI processor with UltraZoom was -4 dB compared to -0.45 dB without UltraZoom. The use of UltraZoom improved the median SRT by 3.6 dB (p < 0.0001, Wilcoxon paired test). When looking at the APHAB outcome, improvement was observed for speech understanding in noisy environments (p < 0.01) and in aversive situations (p < 0.05) in the group of 21 subjects who were able to perform the Matrix test in noise and for speech understanding in noise (p < 0.05) in the group of 13 subjects with the poorest performance, who were not able to perform the Matrix test in noise. The use of UltraZoom beamforming technology, available on the new sound processor Naída CI, improves speech performance in difficult and realistic noisy conditions when the cochlear implant user needs to focus on the person speaking at the front. Using the APHAB questionnaire, a subjective benefit for listening in background noise was also observed in subjects with good performance as well as in those with poor performance. This study highlighted the importance of upgrading CI recipients to new technology and to include assessment in noise and subjective feedback evaluation as part of the process.
Geospace simulations using modern accelerator processor technology
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D. J.
2009-12-01
OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFlops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
Multiprocessor architectural study
NASA Technical Reports Server (NTRS)
Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.
1972-01-01
An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance are discussed extensively, with particular attention to the design approach which utilizes a cache memory associated with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of a hit ratio. Memory management is envisioned as a virtual memory system implemented either through segmentation or paging. Addressing is discussed in terms of the various register designs adopted by current computers and those of advanced design.
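The cache analysis referred to above reduces to an expected-value calculation; the example below, with invented latencies, shows how the effective memory access time, and hence how closely a processor approaches its theoretical maximum, depends on the hit ratio.

```c
/* Effective memory access time as a function of cache hit ratio:
 * t_eff = h * t_cache + (1 - h) * t_memory.  Latencies are illustrative. */
#include <stdio.h>

int main(void)
{
    const double t_cache = 1.0;    /* assumed cache access time (cycles)  */
    const double t_mem   = 20.0;   /* assumed main-memory access time     */

    for (double h = 0.80; h <= 1.0001; h += 0.05) {
        double t_eff = h * t_cache + (1.0 - h) * t_mem;
        /* relative speed if every operation needs one memory access */
        printf("hit ratio %.2f -> t_eff %5.2f cycles, relative speed %.2f\n",
               h, t_eff, t_cache / t_eff);
    }
    return 0;
}
```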
Performance Models for Split-execution Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Schrock, Jonathan
Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
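A behavioral model of this kind can be expressed as an additive time budget; the toy model below uses invented constants purely to show how a fixed translation and programming cost at the classical-quantum interface can dominate the total, independent of the per-sample quantum execution time.

```c
/* Toy additive time budget for a split-execution (CPU + QPU) solve.
 * All constants are invented for illustration; they are not measurements. */
#include <stdio.h>

int main(void)
{
    double t_translate = 50.0e-3;  /* map the classical problem onto the QPU's representation */
    double t_program   = 10.0e-3;  /* load the embedded problem onto the hardware              */
    double t_sample    = 20.0e-6;  /* one QPU execution/readout cycle                          */
    int    n_samples   = 1000;     /* samples drawn per problem                                */

    double t_quantum = n_samples * t_sample;
    double t_total   = t_translate + t_program + t_quantum;

    printf("interface (translate + program): %.3f s (%.0f%% of total)\n",
           t_translate + t_program, 100.0 * (t_translate + t_program) / t_total);
    printf("quantum sampling:                %.3f s\n", t_quantum);
    printf("total:                           %.3f s\n", t_total);
    return 0;
}
```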
Efficacy of Code Optimization on Cache-Based Processors
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
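A representative locality transformation of the kind benchmarked in such studies is loop blocking; the sketch below contrasts a naive matrix transpose with a tiled version that revisits a cache-sized block before moving on. The tile size is a guess, not a tuned value.

```c
/* Naive vs. blocked (tiled) matrix transpose: the blocked version works
 * within a small tile while it is still resident in cache.  TILE is
 * illustrative and would normally be tuned per machine. */
#include <stdio.h>

#define N    512
#define TILE 32

static double a[N][N], b[N][N];

static void transpose_naive(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            b[j][i] = a[i][j];          /* strided writes: poor locality */
}

static void transpose_blocked(void)
{
    for (int ii = 0; ii < N; ii += TILE)
        for (int jj = 0; jj < N; jj += TILE)
            for (int i = ii; i < ii + TILE; i++)
                for (int j = jj; j < jj + TILE; j++)
                    b[j][i] = a[i][j];  /* stays inside a cache-sized tile */
}

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = i * N + j;

    transpose_naive();
    transpose_blocked();
    printf("b[1][2] = %g (expect %g)\n", b[1][2], a[2][1]);
    return 0;
}
```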
MAP3D: a media processor approach for high-end 3D graphics
NASA Astrophysics Data System (ADS)
Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris
1999-12-01
Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with high performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to successfully find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given applications. The ability to run an operating system makes it possible to run concurrent applications in the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.
Performance of VPIC on Sequoia
NASA Astrophysics Data System (ADS)
Nystrom, William
2014-10-01
Sequoia is a major DOE computing resource which is characteristic of future resources in that it has many threads per compute node, 64, and the individual processor cores are simpler and less powerful than cores on previous processors like Intel's Sandy Bridge or AMD's Opteron. An effort is in progress to port VPIC to the Blue Gene Q architecture of Sequoia and evaluate its performance. Results of this work will be presented on single node performance of VPIC as well as multi-node scaling.
Implicit, nonswitching, vector-oriented algorithm for steady transonic flow
NASA Technical Reports Server (NTRS)
Lottati, I.
1983-01-01
A rapid computation of a sequence of transonic flow solutions has to be performed in many areas of aerodynamic technology. The employment of low-cost vector array processors makes the conduction of such calculations economically feasible. However, for a full utilization of the new hardware, the developed algorithms must take advantage of the special characteristics of the vector array processor. The present investigation has the objective to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.
Multi-Core Programming Design Patterns: Stream Processing Algorithms for Dynamic Scene Perceptions
2014-05-01
processor developed by IBM and other companies, incorporates the POWER5 processor as the Power Processor Element (PPE), one of the early general... deliver a power-efficient single-precision peak performance of more than 256 GFlops. Substantially more raw power became available later, when nVIDIA ...algorithms, including IBM's Cell/B.E., GPUs from NVidia and AMD, and many-core CPUs from Intel. The vast growth of digital video content has been a
First Results of an “Artificial Retina” Processor Prototype
Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro; ...
2016-11-15
We report on the performance of a specialized processor capable of reconstructing charged particle tracks in a realistic LHC silicon tracker detector, at the same speed of the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called “artificial retina algorithm”, inspired from the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. Also, the prototype is the first step towards a real-time track reconstruction device aimed at processing complex events of high-luminosity LHC experiments at 40 MHz crossing rate.
A wideband software reconfigurable modem
NASA Astrophysics Data System (ADS)
Turner, J. H., Jr.; Vickers, H.
A wideband modem is described which provides signal processing capability for four Lx-band signals employing QPSK, MSK, and PPM waveforms, and employs a software-reconfigurable architecture for maximum system flexibility and graceful degradation. The current processor uses a 2901 and two 8086 microprocessors per channel and performs acquisition, tracking, and data demodulation for JTIDS, GPS, IFF, and TACAN systems. The next-generation processor will be implemented using a VHSIC chip set employing a programmable complex array vector processor module, a GP computer module, customized gate array modules, and a digital array correlator. This integrated processor has application to a wide number of diverse system waveforms, and will bring the benefits of VHSIC technology insertion into avionic antijam communications systems.
NASA Astrophysics Data System (ADS)
Liu, Fenglai; Kong, Jing
2018-07-01
Unique technical challenges and their solutions for implementing semi-numerical Hartree-Fock exchange on the Phi processor are discussed, especially concerning the single-instruction-multiple-data type of processing and small cache size. Benchmark calculations on a series of buckyball molecules with various Gaussian basis sets on a Phi processor and a six-core CPU show that the Phi processor provides as much as a 12-fold speedup with large basis sets compared with the conventional four-center electron repulsion integration approach performed on the CPU. The accuracy of the semi-numerical scheme is also evaluated and found to be comparable to that of the resolution-of-identity approach.
NASA Astrophysics Data System (ADS)
Heckman, S.
2015-12-01
Modern lightning locating systems (LLS) provide real-time monitoring and early warning of lightning activities. In addition, LLS provide valuable data for statistical analysis in lightning research. It is important to know the performance of such LLS. In the present study, the performance of the Earth Networks Total Lightning Network (ENTLN) is studied using rocket-triggered lightning data acquired at the International Center for Lightning Research and Testing (ICLRT), Camp Blanding, Florida. Eighteen flashes triggered at ICLRT in 2014 were analyzed; they comprise 78 negative cloud-to-ground return strokes. The geometric mean, median, minimum, and maximum for the peak currents of the 78 return strokes are 13.4 kA, 13.6 kA, 3.7 kA, and 38.4 kA, respectively. The peak currents represent typical subsequent return strokes in natural cloud-to-ground lightning. Earth Networks has developed a new data processor to improve the performance of their network. In this study, results are presented for the ENTLN data using the old processor (originally reported in 2014) and the ENTLN data simulated using the new processor. The flash detection efficiency, stroke detection efficiency, percentage of misclassification, median location error, median peak current estimation error, and median absolute peak current estimation error for the originally reported data from the old processor are 100%, 94%, 49%, 271 m, 5%, and 13%, respectively, and those for the simulated data using the new processor are 100%, 99%, 9%, 280 m, 11%, and 15%, respectively. The use of the new processor resulted in higher stroke detection efficiency and a lower percentage of misclassification. It is worth noting that the slight differences in median location error, median peak current estimation error, and median absolute peak current estimation error for the two processors are due to the fact that the new processor detected more return strokes than the old processor.
Optoelectronic switch matrix as a look-up table for residue arithmetic.
Macdonald, R I
1987-10-01
The use of optoelectronic matrix switches to perform look-up table functions in residue arithmetic processors is proposed. In this application, switchable detector arrays give the advantage of a greatly reduced requirement for optical sources by comparison with previous optoelectronic residue processors.
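To illustrate the look-up-table role that the switchable detector arrays would play, the sketch below builds per-modulus addition tables for a small residue number system; the moduli are chosen arbitrarily and the electronic tables stand in for the proposed optoelectronic matrices.

```c
/* Residue-number-system addition via precomputed look-up tables.
 * Moduli {5, 7, 8} are arbitrary; each table mimics the look-up function
 * the optoelectronic switch matrix is proposed to implement. */
#include <stdio.h>

#define NMOD 3
static const int mod[NMOD] = {5, 7, 8};     /* pairwise coprime, range 280 */
static int add_tab[NMOD][8][8];             /* 8 = largest modulus         */

static void build_tables(void)
{
    for (int k = 0; k < NMOD; k++)
        for (int x = 0; x < mod[k]; x++)
            for (int y = 0; y < mod[k]; y++)
                add_tab[k][x][y] = (x + y) % mod[k];
}

static void to_residues(int v, int r[NMOD])
{
    for (int k = 0; k < NMOD; k++)
        r[k] = v % mod[k];
}

int main(void)
{
    build_tables();

    int a = 123, b = 95, ra[NMOD], rb[NMOD], rs[NMOD], expect[NMOD];
    to_residues(a, ra);
    to_residues(b, rb);

    for (int k = 0; k < NMOD; k++)          /* per-channel table look-up */
        rs[k] = add_tab[k][ra[k]][rb[k]];

    to_residues(a + b, expect);
    printf("sum residues: (%d,%d,%d), expected (%d,%d,%d)\n",
           rs[0], rs[1], rs[2], expect[0], expect[1], expect[2]);
    return 0;
}
```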
Hybridization of biomedical circuitry
NASA Technical Reports Server (NTRS)
Rinard, G. A.
1978-01-01
The design and fabrication of low power hybrid circuits to perform vital signs monitoring are reported. The circuits consist of: (1) clock; (2) ECG amplifier and cardiotachometer signal conditioner; (3) impedance pneumograph and respiration rate processor; (4) heart/breath rate processor; (5) temperature monitor; and (6) LCD display.
2004-07-01
steadily for the past fifteen years, while memory latency and bandwidth have improved much more slowly. For example, Intel processor clock rates have... processor and memory performance) all greatly restrict the ability to achieve high levels of performance for science, engineering, and national...sub-nuclear distances. Guide experiments to identify the transition from quantum chromodynamics to quark-gluon plasma. Accelerator Physics Accurate
Power processor for a 20CM ion thruster
NASA Technical Reports Server (NTRS)
Biess, J. J.; Schoenfeld, A. D.; Cohen, E.
1973-01-01
A power processor breadboard for the JPL 20CM Ion Engine was designed, fabricated, and tested to determine compliance with the electrical specification. The power processor breadboard used the silicon-controlled rectifier (SCR) series resonant inverter as the basic power stage to process all the power to the ion engine. The breadboard power processor was integrated with the JPL 20CM ion engine and complete testing was performed. The integration tests were performed without any silicon-controlled rectifier failure. This demonstrated the ruggedness of the series resonant inverter in protecting the switching elements during arcing in the ion engine. A method of fault clearing the ion engine and returning back to normal operation without elaborate sequencing and timing control logic was evolved. In this method, the main vaporizer was turned off and the discharge current limit was reduced when an overload existed on the screen/accelerator supply. After the high voltage returned to normal, both the main vaporizer and the discharge were returned to normal.
Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.
Candy, J V
2015-09-01
The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
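The abstract does not give the authors' ocean model, but the particle-filter machinery it refers to follows a standard predict, weight, and resample cycle. The sketch below tracks a scalar state with an invented random-walk model and Gaussian measurement noise purely to show that cycle.

```c
/* Minimal bootstrap particle filter for a scalar state (illustration only;
 * the dynamics and noise levels are invented, not the paper's ocean model). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NP 500          /* number of particles  */
#define NT 50           /* number of time steps */

static double randu(void) { return (rand() + 1.0) / (RAND_MAX + 2.0); }
static double randn(void)                       /* Box-Muller */
{
    return sqrt(-2.0 * log(randu())) * cos(2.0 * M_PI * randu());
}

int main(void)
{
    double q = 0.1, r = 0.5;                    /* process / measurement noise */
    double x_true = 0.0;
    double x[NP], w[NP], xr[NP];

    for (int i = 0; i < NP; i++) x[i] = randn();  /* initial particle cloud */

    for (int t = 0; t < NT; t++) {
        x_true += q * randn();                    /* "true" random-walk state */
        double z = x_true + r * randn();          /* noisy measurement        */

        double wsum = 0.0;
        for (int i = 0; i < NP; i++) {            /* predict + weight         */
            x[i] += q * randn();
            double d = z - x[i];
            w[i] = exp(-0.5 * d * d / (r * r));
            wsum += w[i];
        }

        double est = 0.0;
        for (int i = 0; i < NP; i++) { w[i] /= wsum; est += w[i] * x[i]; }

        /* systematic resampling */
        double u = randu() / NP, c = w[0];
        int j = 0;
        for (int i = 0; i < NP; i++) {
            double ui = u + (double)i / NP;
            while (ui > c && j < NP - 1) c += w[++j];
            xr[i] = x[j];
        }
        for (int i = 0; i < NP; i++) x[i] = xr[i];

        if (t % 10 == 0)
            printf("t=%2d  truth=% .3f  estimate=% .3f\n", t, x_true, est);
    }
    return 0;
}
```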
Optical interconnection using polyimide waveguide for multichip module
NASA Astrophysics Data System (ADS)
Koyanagi, Mitsumasa
1996-01-01
We have developed a parallel processor system with 152 RISC processor chips specific for Monte-Carlo analysis. This system has the ring-bus architecture. The performance of several Gflops is expected in this system according to the computer simulation. However, it was revealed that the data transfer speed of the bus has to be increased more dramatically in order to further increase the performance. Then, we propose to introduce the optical interconnection into the parallel processor system to increase the data transfer speed of the buses. The double ring-bus architecture is employed in this new parallel processor system with optical interconnection. The free-space optical interconnection and the optical waveguide are used for the optical ring-bus. Thin polyimide film was used to form the optical waveguide. A relatively low propagation loss was achieved in the polyimide optical waveguide. In addition, it was confirmed that the propagation direction of signal light can be easily changed by using a micro-mirror.
NASA Astrophysics Data System (ADS)
Yokoyama, Yoshiaki; Kim, Minseok; Arai, Hiroyuki
At present, when using space-time processing techniques with multiple antennas for mobile radio communication, real-time weight adaptation is necessary. Due to the progress of integrated circuit technology, dedicated processor implementation with ASIC or FPGA can be employed to implement various wireless applications. This paper presents a resource and performance evaluation of the QRD-RLS systolic array processor based on fixed-point CORDIC algorithm with FPGA. In this paper, to save hardware resources, we propose the shared architecture of a complex CORDIC processor. The required precision of internal calculation, the circuit area for the number of antenna elements and wordlength, and the processing speed will be evaluated. The resource estimation provides a possible processor configuration with a current FPGA on the market. Computer simulations assuming a fading channel will show a fast convergence property with a finite number of training symbols. The proposed architecture has also been implemented and its operation was verified by beamforming evaluation through a radio propagation experiment.
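The fixed-point CORDIC kernel at the heart of such a QRD-RLS array can be shown in a few lines. The sketch below runs CORDIC in vectoring mode to recover the magnitude and angle of a sample vector using integer arithmetic; the word length and iteration count are arbitrary choices, not the ones evaluated in the paper.

```c
/* Fixed-point CORDIC in vectoring mode: rotates (x, y) onto the x-axis,
 * yielding the vector magnitude (scaled by the CORDIC gain) and its angle.
 * Q16 format and 16 iterations are illustrative choices. */
#include <stdio.h>
#include <math.h>
#include <stdint.h>

#define ITER 16
#define Q    16                       /* Q16 fixed point: 1.0 == 1 << 16 */

int main(void)
{
    /* atan(2^-k) table in Q16 radians, precomputed once */
    int32_t atan_tab[ITER];
    for (int k = 0; k < ITER; k++)
        atan_tab[k] = (int32_t)lround(atan(ldexp(1.0, -k)) * (1 << Q));

    double xin = 0.60, yin = 0.35;    /* sample input vector */
    int32_t x = (int32_t)lround(xin * (1 << Q));
    int32_t y = (int32_t)lround(yin * (1 << Q));
    int32_t z = 0;                    /* accumulated angle   */

    for (int k = 0; k < ITER; k++) {
        int32_t xs = x >> k, ys = y >> k;
        if (y > 0) { x += ys; y -= xs; z += atan_tab[k]; }  /* drive y to 0 */
        else       { x -= ys; y += xs; z -= atan_tab[k]; }
    }

    const double gain = 1.6467602581210656;     /* CORDIC scale factor */
    double mag   = (double)x / (1 << Q) / gain;
    double angle = (double)z / (1 << Q);

    printf("magnitude %.5f (expect %.5f), angle %.5f rad (expect %.5f)\n",
           mag, hypot(xin, yin), angle, atan2(yin, xin));
    return 0;
}
```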
Software-Reconfigurable Processors for Spacecraft
NASA Technical Reports Server (NTRS)
Farrington, Allen; Gray, Andrew; Bell, Bryan; Stanton, Valerie; Chong, Yong; Peters, Kenneth; Lee, Clement; Srinivasan, Jeffrey
2005-01-01
A report presents an overview of an architecture for a software-reconfigurable network data processor for a spacecraft engaged in scientific exploration. When executed on suitable electronic hardware, the software performs the functions of a physical layer (in effect, acts as a software radio in that it performs modulation, demodulation, pulse-shaping, error correction, coding, and decoding), a data-link layer, a network layer, a transport layer, and application-layer processing of scientific data. The software-reconfigurable network processor is undergoing development to enable rapid prototyping and rapid implementation of communication, navigation, and scientific signal-processing functions; to provide a long-lived communication infrastructure; and to provide greatly improved scientific-instrumentation and scientific-data-processing functions by enabling science-driven in-flight reconfiguration of computing resources devoted to these functions. This development is an extension of terrestrial radio and network developments (e.g., in the cellular-telephone industry) implemented in software running on such hardware as field-programmable gate arrays, digital signal processors, traditional digital circuits, and mixed-signal application-specific integrated circuits (ASICs).
A sensitometric evaluation of a dental radiographic developer additive.
Price, C
1982-04-01
The efficacy of a commercially available developer additive was investigated by a sensitometric method. The effects of the additive on two different manual developing solutions were evaluated immediately after mixing and at intervals of 5, 10, and 15 weeks. An automatic film processor was used to process control radiographs. Under the conditions of this investigation, no advantages were demonstrated following the use of the additive.
2011-12-29
ISS030-E-017789 (29 Dec. 2011) --- Working in chorus with the International Space Station team in Houston's Mission Control Center, this astronaut and his Expedition 30 crewmates on the station install a set of Enhanced Processor and Integrated Communications (EPIC) computer cards in one of seven primary computers onboard. The upgrade will allow more experiments to operate simultaneously, and prepare for the arrival of commercial cargo ships later this year.
2011-12-29
ISS030-E-017776 (29 Dec. 2011) --- Working in chorus with the International Space Station team in Houston's Mission Control Center, this astronaut and his Expedition 30 crewmates on the station install a set of Enhanced Processor and Integrated Communications (EPIC) computer cards in one of seven primary computers onboard. The upgrade will allow more experiments to operate simultaneously, and prepare for the arrival of commercial cargo ships later this year.
Radiation-Hardened Wafer Scale Integration
1989-10-25
EXECUTIVE SUMMARY: A focal plane processor (FPP) for a large array of LWIR photodetectors on a space platform must... It seems certain that large, scanning LWIR arrays will once again be of interest in the future, though their specifications will differ from those... nonuniformity and defects in the ZMR material, but films of good quality produced by this technique are now available commercially from Kopin Corporation. Such
Cyberspace and Posse Comitatus: Legal Implications of a Borderless Domain
2010-03-01
technology infrastructures, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.” This...the people, and stopped just short of shutting down economic markets. Though never admitted, all indications point to a coordinated attack from...control orders transit many of the same, generally commercially-owned, routers, switches, computers, and wires, each with the goal of passing information
The Application of Microwave Incineration to Regenerative Life Support
NASA Technical Reports Server (NTRS)
Sun, Sidney C.; Srinivasan, Venkatesh; Covington, Al (Technical Monitor)
1995-01-01
Future human exploration missions will require life support systems that are highly regenerative, requiring minimum resupply, enabling the crews to be largely self-sufficient. Solid wastes generated in space will be processed to recover usable material. Researchers at NASA Ames Research Center are studying a commercially-produced microwave incinerator as a solid waste processor. This paper will describe the results of testing to-date.
Advanced Physiological Estimation of Cognitive Status. Part 2
2011-05-24
Neurofeedback Algorithms and Gaze Controller EEG Sensor System: g.USBamp (internal 24-bit ADC and digital signal processor; 16 channels, expandable...) SUBJECT TERMS: EEG, eye-tracking, mental state estimation, machine learning. Leonard J. Trejo, Pacific Development and Technology LLC, 999 Commercial St., Palo... fatigue, overload) Technology Transfer Opportunity: Technology from PDT - methods to acquire various physiological signals (EEG, EOG, EMG, ECG, etc
Laboratory systems integration: robotics and automation.
Felder, R A
1991-01-01
Robotic technology is going to have a profound impact on the clinical laboratory of the future. Faced with increased pressure to reduce health care spending yet increase services to patients, many laboratories are looking for alternatives to the inflexible or "fixed" automation found in many clinical analyzers. Robots are being examined by many clinical pathologists as an attractive technology which can adapt to the constant changes in laboratory testing. Already, laboratory designs are being altered to accommodate robotics and automated specimen processors. However, the use of robotics and computer intelligence in the clinical laboratory is still in its infancy. Successful examples of robotic automation exist in several laboratories. Investigators have used robots to automate endocrine testing, high performance liquid chromatography, and specimen transportation. Large commercial laboratories are investigating the use of specimen processors which combine the use of fixed automation and robotics. Robotics have also reduced the exposure of medical technologists to specimens infected with viral pathogens. The successful examples of clinical robotics applications were a result of the cooperation of clinical chemists, engineers, and medical technologists. At the University of Virginia we have designed and implemented a robotic critical care laboratory. Initial clinical experience suggests that robotic performance is reliable, however, staff acceptance and utilization requires continuing education. We are also developing a robotic cyclosporine which promises to greatly reduce the labor costs of this analysis. The future will bring lab wide automation that will fully integrate computer artificial intelligence and robotics. Specimens will be transported by mobile robots. Specimen processing, aliquotting, and scheduling will be automated.(ABSTRACT TRUNCATED AT 250 WORDS)
An Efficient Solution Method for Multibody Systems with Loops Using Multiple Processors
NASA Technical Reports Server (NTRS)
Ghosh, Tushar K.; Nguyen, Luong A.; Quiocho, Leslie J.
2015-01-01
This paper describes a multibody dynamics algorithm formulated for parallel implementation on multiprocessor computing platforms using the divide-and-conquer approach. The system of interest is a general topology of rigid and elastic articulated bodies with or without loops. The algorithm divides the multibody system into a number of smaller sets of bodies in chain or tree structures, called "branches" at convenient joints called "connection points", and uses an Order-N (O (N)) approach to formulate the dynamics of each branch in terms of the unknown spatial connection forces. The equations of motion for the branches, leaving the connection forces as unknowns, are implemented in separate processors in parallel for computational efficiency, and the equations for all the unknown connection forces are synthesized and solved in one or several processors. The performances of two implementations of this divide-and-conquer algorithm in multiple processors are compared with an existing method implemented on a single processor.
Solving very large, sparse linear systems on mesh-connected parallel computers
NASA Technical Reports Server (NTRS)
Opsahl, Torstein; Reif, John
1987-01-01
The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer by slowing down the algorithm by constant factors is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard is given. Also, a detailed discussion of data mappings and performance issues is given.
Safe and Efficient Support for Embedded Multi-Processors in Ada
NASA Astrophysics Data System (ADS)
Ruiz, Jose F.
2010-08-01
New software demands increasing processing power, and multi-processor platforms are spreading as the answer to achieve the required performance. Embedded real-time systems are also subject to this trend, but in the case of real-time mission-critical systems, the properties of reliability, predictability and analyzability are also paramount. The Ada 2005 language defined a subset of its tasking model, the Ravenscar profile, that provides the basis for the implementation of deterministic and time analyzable applications on top of a streamlined run-time system. This Ravenscar tasking profile, originally designed for single processors, has proven remarkably useful for modelling verifiable real-time single-processor systems. This paper proposes a simple extension to the Ravenscar profile to support multi-processor systems using a fully partitioned approach. The implementation of this scheme is simple, and it can be used to develop applications amenable to schedulability analysis.
Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu
1995-01-01
As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independent of the other processors. The global image composing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPs or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to generic oblique-wing aircraft problem on a four processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
Design and implementation of a high performance network security processor
NASA Astrophysics Data System (ADS)
Wang, Haixin; Bai, Guoqiang; Chen, Hongyi
2010-03-01
The last few years have seen much significant progress in the field of application-specific processors. One example is network security processors (NSPs) that perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps rate NSP, which is programmable with domain specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000 based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.
Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture
NASA Technical Reports Server (NTRS)
Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.
2010-01-01
The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies that are aimed at enabling NASA's ability to explore beyond low earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation effects assessment, and radiation environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as Silicon-Germanium (SiGe)) to enhance a device's tolerance to radiation events and low temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project emphasis shifts its focus to developing low-power, high efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability gives avionic architectures the ability to develop FPGA-based, radiation tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space. Specifically, the AAPS tasks for the Federal fiscal year of 2010 are: Silicon-Germanium (SiGe) Integrated Electronics for Extreme Environments, Modeling of Radiation Effects on Electronics, Radiation Hardened High Performance Processors (HPP), and Reconfigurable Computing.
Multicore Programming Challenges
NASA Astrophysics Data System (ADS)
Perrone, Michael
The computer industry is facing fundamental challenges that are driving a major change in the design of computer processors. Due to restrictions imposed by quantum physics, one historical path to higher computer processor performance - by increased clock frequency - has come to an end. Increasing clock frequency now leads to power consumption costs that are too high to justify. As a result, we have seen in recent years that the processor frequencies have peaked and are receding from their high point. At the same time, competitive market conditions are giving business advantage to those companies that can field new streaming applications, handle larger data sets, and update their models to market conditions faster. The desire for newer, faster and larger is driving continued demand for higher computer performance.
Exact diagonalization of quantum lattice models on coprocessors
NASA Astrophysics Data System (ADS)
Siro, T.; Harju, A.
2016-10-01
We implement the Lanczos algorithm on an Intel Xeon Phi coprocessor and compare its performance to a multi-core Intel Xeon CPU and an NVIDIA graphics processor. The Xeon and the Xeon Phi are parallelized with OpenMP and the graphics processor is programmed with CUDA. The performance is evaluated by measuring the execution time of a single step in the Lanczos algorithm. We study two quantum lattice models with different particle numbers, and conclude that for small systems, the multi-core CPU is the fastest platform, while for large systems, the graphics processor is the clear winner, reaching speedups of up to 7.6 compared to the CPU. The Xeon Phi outperforms the CPU with sufficiently large particle number, reaching a speedup of 2.5.
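The single step being timed is the standard three-term Lanczos recurrence. The sketch below performs one such step for a small dense symmetric matrix, which stands in for the (large and sparse) quantum lattice Hamiltonian.

```c
/* One Lanczos step for a symmetric matrix A:
 *   w = A v_j - beta_j v_{j-1};  alpha_j = w.v_j;  w -= alpha_j v_j;
 *   beta_{j+1} = ||w||;  v_{j+1} = w / beta_{j+1}.
 * A small dense matrix stands in for the sparse lattice Hamiltonian. */
#include <stdio.h>
#include <math.h>

#define N 4

static void matvec(const double A[N][N], const double v[N], double w[N])
{
    for (int i = 0; i < N; i++) {
        w[i] = 0.0;
        for (int j = 0; j < N; j++) w[i] += A[i][j] * v[j];
    }
}

int main(void)
{
    double A[N][N] = {{2, 1, 0, 0},
                      {1, 2, 1, 0},
                      {0, 1, 2, 1},
                      {0, 0, 1, 2}};          /* symmetric test matrix */
    double v_prev[N] = {0, 0, 0, 0};          /* v_{j-1}               */
    double v[N]      = {0.5, 0.5, 0.5, 0.5};  /* v_j, unit norm        */
    double beta = 0.0;                        /* beta_j                */
    double w[N];

    matvec(A, v, w);                          /* w = A v_j             */
    for (int i = 0; i < N; i++) w[i] -= beta * v_prev[i];

    double alpha = 0.0;
    for (int i = 0; i < N; i++) alpha += w[i] * v[i];
    for (int i = 0; i < N; i++) w[i] -= alpha * v[i];

    double beta_next = 0.0;
    for (int i = 0; i < N; i++) beta_next += w[i] * w[i];
    beta_next = sqrt(beta_next);
    for (int i = 0; i < N; i++) w[i] /= beta_next;   /* v_{j+1}        */

    printf("alpha = %.6f, beta_next = %.6f, v_next[0] = %.6f\n",
           alpha, beta_next, w[0]);
    return 0;
}
```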
Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications
NASA Astrophysics Data System (ADS)
Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei
2007-04-01
In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.
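The core operation such a processor accelerates is block matching. As a point of reference only (it is not the paper's architecture), the sketch below performs an exhaustive full search over a small window using the sum of absolute differences (SAD) criterion.

```c
/* Full-search block-matching motion estimation with a SAD cost.
 * Frame size, block size, and search range are small illustrative values. */
#include <stdio.h>
#include <stdlib.h>

#define W      64
#define H      64
#define BLK    8
#define RANGE  4          /* search +/- RANGE pixels in each direction */

static unsigned char ref[H][W], cur[H][W];

static int sad(int bx, int by, int dx, int dy)
{
    int s = 0;
    for (int y = 0; y < BLK; y++)
        for (int x = 0; x < BLK; x++)
            s += abs((int)cur[by + y][bx + x] - (int)ref[by + dy + y][bx + dx + x]);
    return s;
}

int main(void)
{
    /* synthetic frames: "cur" is "ref" shifted by (2, 1) pixels */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            ref[y][x] = (unsigned char)((x * 7 + y * 13) & 0xff);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            cur[y][x] = ref[(y + 1) % H][(x + 2) % W];

    int bx = 24, by = 24;             /* block well inside the search margin */
    int best = 1 << 30, best_dx = 0, best_dy = 0;
    for (int dy = -RANGE; dy <= RANGE; dy++)
        for (int dx = -RANGE; dx <= RANGE; dx++) {
            int s = sad(bx, by, dx, dy);
            if (s < best) { best = s; best_dx = dx; best_dy = dy; }
        }

    printf("motion vector (%d, %d), SAD %d (expect (2, 1))\n",
           best_dx, best_dy, best);
    return 0;
}
```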
An update to CMAQ's Meteorology/Chemistry Interface Processor Version 2 (MCIP2) will be released in August 2004 in conjunction with the next public release of the CMAQ model. MCIP2 is the pre-processor in the CMAQ system that is typically used to perform off-line linkage between...
7 CFR 252.4 - Application to participate and agreement.
Code of Federal Regulations, 2010 CFR
2010-01-01
... integrity, business ethics and performance. In addition, the processors must demonstrate their ability to sell end products under NCP by submitting supporting documentation such as written intent to purchase... purchased, the processor shall invoice the recipient agency at the net case price which shall reflect the...
An Intrinsically Digital Amplification Scheme for Hearing Aids
NASA Astrophysics Data System (ADS)
Blamey, Peter J.; Macfarlane, David S.; Steele, Brenton R.
2005-12-01
Results for linear and wide-dynamic range compression were compared with a new 64-channel digital amplification strategy in three separate studies. The new strategy addresses the requirements of the hearing aid user with efficient computations on an open-platform digital signal processor (DSP). The new amplification strategy is not modeled on prior analog strategies like compression and linear amplification, but uses statistical analysis of the signal to optimize the output dynamic range in each frequency band independently. Using the open-platform DSP processor also provided the opportunity for blind trial comparisons of the different processing schemes in BTE and ITE devices of a high commercial standard. The speech perception scores and questionnaire results show that it is possible to provide improved audibility for sound in many narrow frequency bands while simultaneously improving comfort, speech intelligibility in noise, and sound quality.
Clouser, C S; Doores, S; Mast, M G; Knabel, S J
1995-04-01
This study was undertaken to determine whether the incidence of either Salmonella spp. or Listeria monocytogenes on turkeys at three commercial processors could be related to the type of defeathering system: 1) conventional, 58 C common bath scald; 2) kosher, 7 C common bath scald; or 3) steam-spray, 62 C nonimmersion scald. Flocks were sampled before defeathering, after defeathering, and after chill at each facility. The incidence of Salmonella-positive turkeys significantly increased subsequent to conventional defeathering (10 positive out of 14) as compared with before defeathering (3/14). The number of Salmonella-positive carcasses following kosher (0/14) and steam-spray (2/14) defeathering were similar to the number of Salmonella-positive carcasses found prior to defeathering (1/14 and 3/14, respectively). The incidence of Salmonella-positive carcasses following chill was slightly lower, but not significantly different than the number of Salmonella-positive carcasses found immediately following defeathering at all processors (8/14, 0/14, 1/14 for conventional, kosher, and steam-spray processors, respectively). Although L. monocytogenes was detected on turkeys sampled before chilling (2/10, kosher) and after chilling (8/14, kosher; 1/14, conventional), no L. monocytogenes was detected on turkeys at any of the processors prior to the evisceration process. Flocks with high aerobic plate counts prior to processing were more likely to contain Salmonella-positive birds throughout processing. Aerobic plate counts of all flocks were similar after chill whether or not Salmonella spp. and L. monocytogenes were detected.
AFOSR BRI: Co-Design of Hardware/Software for Predicting MAV Aerodynamics
2016-09-27
While Moore's Law theoretically doubles processor performance every 24 months, much of the realizable performance remains...past efforts to develop such CFD codes on accelerated processors showed limited success, our hardware/software co-design approach created malleable
Modeling Large Scale Circuits Using Massively Parallel Descrete-Event Simulation
2013-06-01
exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g. its power consumption...Warp Speed 10.0. 2.0 INTRODUCTION As supercomputer systems approach exascale, the core count will exceed 1024 and number of transistors used in
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
Implementation of an ADI method on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
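The serial kernel behind the Gaussian elimination used on the FLEX/32 and CRAY/2 is the Thomas algorithm for tridiagonal systems; a minimal version is sketched below (the cyclic elimination variant used on the MPP is not shown).

```c
/* Thomas algorithm: Gaussian elimination specialized to a tridiagonal
 * system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].  Small test system
 * whose exact solution is x[i] = 1 for all i. */
#include <stdio.h>

#define N 5

int main(void)
{
    double a[N] = {0, -1, -1, -1, -1};   /* sub-diagonal (a[0] unused)     */
    double b[N] = {2, 2, 2, 2, 2};       /* diagonal                       */
    double c[N] = {-1, -1, -1, -1, 0};   /* super-diagonal (c[N-1] unused) */
    double d[N] = {1, 0, 0, 0, 1};       /* right-hand side                */
    double x[N];

    /* forward sweep: eliminate the sub-diagonal */
    for (int i = 1; i < N; i++) {
        double m = a[i] / b[i - 1];
        b[i] -= m * c[i - 1];
        d[i] -= m * d[i - 1];
    }

    /* back substitution */
    x[N - 1] = d[N - 1] / b[N - 1];
    for (int i = N - 2; i >= 0; i--)
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i];

    for (int i = 0; i < N; i++) printf("x[%d] = %.6f\n", i, x[i]);
    return 0;
}
```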
Comparing an FPGA to a Cell for an Image Processing Application
NASA Astrophysics Data System (ADS)
Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.
2010-12-01
Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to discover the parallel nature of modern image processing algorithms. On the other hand, PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, Iris Recognition Systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.
Martin, N H; Ranieri, M L; Murphy, S C; Ralyea, R D; Wiedmann, M; Boor, K J
2011-03-01
Analytical tools that accurately predict the performance of raw milk following its manufacture into commercial food products are of economic interest to the dairy industry. To evaluate the ability of currently applied raw milk microbiological tests to predict the quality of commercially pasteurized fluid milk products, samples of raw milk and 2% fat pasteurized milk were obtained from 4 New York State fluid milk processors for a 1-yr period. Raw milk samples were examined using a variety of tests commonly applied to raw milk, including somatic cell count, standard plate count, psychrotrophic bacteria count, ropy milk test, coliform count, preliminary incubation count, laboratory pasteurization count, and spore pasteurization count. Differential and selective media were used to identify groups of bacteria present in raw milk. Pasteurized milk samples were held at 6°C for 21 d and evaluated for standard plate count, coliform count, and sensory quality throughout shelf-life. Bacterial isolates from select raw and pasteurized milk tests were identified using 16S ribosomal DNA sequencing. Linear regression analysis of raw milk test results versus results reflecting pasteurized milk quality consistently showed low R² values (<0.45); the majority of R² values were <0.25, indicating a weak relationship between the results from the raw milk tests and results from tests used to evaluate pasteurized milk quality. Our findings suggest the need for new raw milk tests that measure the specific biological barriers that limit shelf-life and quality of fluid milk products. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Pierce, Paul E.
1986-01-01
A hardware processor is disclosed which in the described embodiment is a memory mapped multiplier processor that can operate in parallel with a 16 bit microcomputer. The multiplier processor decodes the address bus to receive specific instructions so that in one access it can write and automatically perform single or double precision multiplication involving a number written to it with or without addition or subtraction with a previously stored number. It can also, on a single read command automatically round and scale a previously stored number. The multiplier processor includes two concatenated 16 bit multiplier registers, two 16 bit concatenated 16 bit multipliers, and four 16 bit product registers connected to an internal 16 bit data bus. A high level address decoder determines when the multiplier processor is being addressed and first and second low level address decoders generate control signals. In addition, certain low order address lines are used to carry uncoded control signals. First and second control circuits coupled to the decoders generate further control signals and generate a plurality of clocking pulse trains in response to the decoded and address control signals.
Apparatus for and method of testing an electrical ground fault circuit interrupt device
Andrews, L.B.
1998-08-18
An apparatus for testing a ground fault circuit interrupt device includes a processor, an input device connected to the processor for receiving input from an operator, a storage media connected to the processor for storing test data, an output device connected to the processor for outputting information corresponding to the test data to the operator, and a calibrated variable load circuit connected between the processor and the ground fault circuit interrupt device. The ground fault circuit interrupt device is configured to trip a corresponding circuit breaker. The processor is configured to receive signals from the calibrated variable load circuit and to process the signals to determine a trip threshold current and/or a trip time. A method of testing the ground fault circuit interrupt device includes a first step of providing an identification for the ground fault circuit interrupt device. Test data is then recorded in accordance with the identification. By comparing test data from an initial test with test data from a subsequent test, a trend of performance for the ground fault circuit interrupt device is determined. 17 figs.
NASA Astrophysics Data System (ADS)
Erez, Mattan; Dally, William J.
Stream processors, like other multi-core architectures, partition their functional units and storage into multiple processing elements. In contrast to typical architectures, which contain symmetric general-purpose cores and a cache hierarchy, stream processors have a significantly leaner design. Stream processors are specifically designed for the stream execution model, in which applications have large amounts of explicit parallel computation, structured and predictable control, and memory accesses that can be performed at a coarse granularity. Applications in the streaming model are expressed in a gather-compute-scatter form, yielding programs with explicit control over transferring data to and from on-chip memory. Relying on these characteristics, which are common to many media processing and scientific computing applications, stream architectures redefine the boundary between software and hardware responsibilities, with software bearing much of the complexity required to manage concurrency, locality, and latency tolerance. Thus, stream processors have minimal control consisting of fetching medium- and coarse-grained instructions and executing them directly on the many ALUs. Moreover, the on-chip storage hierarchy of stream processors is under explicit software control, as is all communication, eliminating the need for complex reactive hardware mechanisms.
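The gather-compute-scatter form can be shown in miniature: the sketch below gathers elements through an index array into a compact working set, applies a kernel, and scatters the results back, which is the access pattern a stream compiler maps onto on-chip memory.

```c
/* Gather-compute-scatter in miniature: indexed loads into a small working
 * set, a kernel over that set, then indexed stores of the results. */
#include <stdio.h>

#define N      16       /* size of the global array       */
#define STREAM 6        /* elements gathered per "record" */

int main(void)
{
    double mem[N];
    int    idx[STREAM] = {1, 3, 4, 7, 11, 14};   /* gather/scatter indices */
    double in[STREAM], out[STREAM];

    for (int i = 0; i < N; i++) mem[i] = (double)i;

    for (int k = 0; k < STREAM; k++) in[k] = mem[idx[k]];        /* gather  */
    for (int k = 0; k < STREAM; k++) out[k] = 2.0 * in[k] + 1.0; /* compute */
    for (int k = 0; k < STREAM; k++) mem[idx[k]] = out[k];       /* scatter */

    for (int i = 0; i < N; i++) printf("%g ", mem[i]);
    printf("\n");
    return 0;
}
```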
Progress in video immersion using Panospheric imaging
NASA Astrophysics Data System (ADS)
Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.
1998-09-01
Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, PanosphericTM Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive PanosphericTM imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).
50 CFR 679.52 - Observer provider permitting and responsibilities.
Code of Federal Regulations, 2014 CFR
2014-10-01
... samples from the observer's deployment by the completion of the electronic vessel and/or processor survey(s); (ii) Complete NMFS electronic vessel and/or processor surveys before performing other jobs or... or experimental fishing as defined in § 600.10 of this chapter. (2) Must not have a direct financial...
50 CFR 679.52 - Observer provider permitting and responsibilities.
Code of Federal Regulations, 2013 CFR
2013-10-01
... samples from the observer's deployment by the completion of the electronic vessel and/or processor survey(s); (ii) Complete NMFS electronic vessel and/or processor surveys before performing other jobs or... or experimental fishing as defined in § 600.10 of this chapter. (2) Must not have a direct financial...
Parallel eigenanalysis of finite element models in a completely connected architecture
NASA Technical Reports Server (NTRS)
Akl, F. A.; Morel, M. R.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
Power processor for a 30cm ion thruster
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.
1974-01-01
A thermal vacuum power processor for the NASA Lewis 30cm Mercury Ion Engine was designed, fabricated and tested to determine compliance with electrical specifications. The power processor breadboard used the silicon controlled rectifier (SCR) series resonant inverter as the basic power stage to process all the power to an ion engine. The power processor includes a digital interface unit to process all input commands and internal telemetry signals so that operation is compatible with a central computer system. The breadboard was tested in a thermal vacuum environment. Integration tests were performed with the ion engine and demonstrated operational compatibility and reliable operation without any component failures. Electromagnetic interference data were also recorded on the design to provide information on the interaction with the total spacecraft.
Stencils and problem partitionings: Their influence on the performance of multiple processor systems
NASA Technical Reports Server (NTRS)
Reed, D. A.; Adams, L. M.; Patrick, M. L.
1986-01-01
Given a discretization stencil, partitioning the problem domain is an important first step for the efficient solution of partial differential equations on multiple processor systems. Partitions are derived that minimize interprocessor communication when the number of processors is known a priori and each domain partition is assigned to a different processor. This partitioning technique uses the stencil structure to select appropriate partition shapes. For square problem domains, it is shown that non-standard partitions (e.g., hexagons) are frequently preferable to the standard square partitions for a variety of commonly used stencils. This investigation is concluded with a formalization of the relationship between partition shape, stencil structure, and architecture, allowing selection of optimal partitions for a variety of parallel systems.
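The trade-off between partition shape and interprocessor communication can be made concrete with a small calculation. The sketch below is an illustration, not the paper's stencil-aware analysis: it estimates the boundary data a single partition exchanges per iteration for a nearest-neighbour stencil as roughly proportional to the partition's perimeter, and compares a square partition with a hexagonal one of equal area.

```c
#include <math.h>
#include <stdio.h>

/* Simplified, continuous estimate of per-iteration communication for one
 * partition of area A: for a nearest-neighbour (5-point) stencil, the
 * number of boundary values exchanged grows roughly with the perimeter.
 * This isoperimetric argument is illustrative only; the paper's analysis
 * additionally accounts for the stencil structure. */
static double square_perimeter(double area)  { return 4.0 * sqrt(area); }

static double hexagon_perimeter(double area)
{
    /* regular hexagon: area = (3*sqrt(3)/2) * s^2, perimeter = 6*s */
    double s = sqrt(2.0 * area / (3.0 * sqrt(3.0)));
    return 6.0 * s;
}

int main(void)
{
    double area = 4096.0;   /* grid points assigned to one processor */
    printf("square  boundary ~ %.1f values/iteration\n", square_perimeter(area));
    printf("hexagon boundary ~ %.1f values/iteration\n", hexagon_perimeter(area));
    /* The hexagon's smaller perimeter hints at why non-square partitions
     * can reduce interprocessor communication for some stencils. */
    return 0;
}
```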
Development for SSV on a parallel processing system (PARAGON)
NASA Astrophysics Data System (ADS)
Gothard, Benny M.; Allmen, Mark; Carroll, Michael J.; Rich, Dan
1995-12-01
A goal of the surrogate semi-autonomous vehicle (SSV) program is to have multiple vehicles navigate autonomously and cooperatively with other vehicles. This paper describes the process and tools used in porting UGV/SSV (unmanned ground vehicle) autonomous mobility and target recognition algorithms from a SISD (single instruction single data) processor architecture (i.e., a Sun SPARC workstation running C/UNIX) to a MIMD (multiple instruction multiple data) parallel processor architecture (i.e., PARAGON-a parallel set of i860 processors running C/UNIX). It discusses the gains in performance and the pitfalls of such a venture. It also examines the merits of this processor architecture (based on this conceptual prototyping effort) and programming paradigm to meet the final SSV demonstration requirements.
A Low-Cost and Energy-Efficient Multiprocessor System-on-Chip for UWB MAC Layer
NASA Astrophysics Data System (ADS)
Xiao, Hao; Isshiki, Tsuyoshi; Khan, Arif Ullah; Li, Dongju; Kunieda, Hiroaki; Nakase, Yuko; Kimura, Sadahiro
Ultra-wideband (UWB) technology has attracted much attention recently due to its high data rate and low emission power. Its media access control (MAC) protocol, WiMedia MAC, promises a lot of facilities for high-speed and high-quality wireless communication. These benefits, however, come with a large computational load, which challenges traditional uniprocessor-based implementations to provide the required performance. At the same time, the constrained cost and power budget makes commercial multiprocessor solutions unrealistic. In this paper, a low-cost and energy-efficient multiprocessor system-on-chip (MPSoC), which tackles at once the aspects of system design, software migration and hardware architecture, is presented for the implementation of the UWB MAC layer. Experimental results show that the proposed MPSoC, based on four simple RISC processors and a shared-memory infrastructure, achieves up to 45% performance improvement and 65% power saving, while taking 15% less area than the uniprocessor implementation.
NASA Astrophysics Data System (ADS)
Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.
1999-12-01
The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort made up of industry, government and academic representatives who have defined an industry-standard API for vector, signal, and image processing primitives for real-time signal processing on high performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices and tensors) and is ideal for astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality, and the status of various implementations.
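The block/view abstraction mentioned above separates raw storage (a block) from the strided windows (views) that operations consume, which is what lets an implementation control physical data placement. The sketch below illustrates the idea with deliberately hypothetical C types and function names; it does not use the real VSIPL identifiers, whose exact names and signatures should be taken from the standard.

```c
#include <stdlib.h>

/* Hypothetical illustration of the block/view idea: a block owns raw
 * storage whose physical layout the library controls; a view is an
 * (offset, stride, length) window that algorithms operate on.  These
 * types and functions are illustrative, not the VSIPL API. */
typedef struct { float *data; size_t len; } block_f;
typedef struct { block_f *blk; size_t offset, stride, length; } vview_f;

static block_f *block_create(size_t len)
{
    block_f *b = malloc(sizeof *b);
    b->data = calloc(len, sizeof *b->data);
    b->len = len;
    return b;
}

static vview_f view_bind(block_f *b, size_t offset, size_t stride, size_t length)
{
    vview_f v = { b, offset, stride, length };
    return v;
}

/* Element-wise add on views: the caller never touches raw pointers, so an
 * implementation is free to place the block in whatever memory it likes. */
static void view_add(const vview_f *a, const vview_f *b, const vview_f *r)
{
    for (size_t i = 0; i < r->length; ++i)
        r->blk->data[r->offset + i * r->stride] =
            a->blk->data[a->offset + i * a->stride] +
            b->blk->data[b->offset + i * b->stride];
}

int main(void)
{
    block_f *b = block_create(16);
    vview_f evens = view_bind(b, 0, 2, 8);   /* even-indexed elements */
    vview_f odds  = view_bind(b, 1, 2, 8);   /* odd-indexed elements  */
    vview_f out   = view_bind(b, 0, 2, 8);
    view_add(&evens, &odds, &out);           /* out[i] = evens[i] + odds[i] */
    free(b->data); free(b);
    return 0;
}
```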
Soft control of scanning probe microscope with high flexibility.
Liu, Zhenghui; Guo, Yuzheng; Zhang, Zhaohui; Zhu, Xing
2007-01-01
Most commercial scanning probe microscopes have multiple embedded digital microprocessors and utilize complex software for system control, which is not easily obtained or modified by researchers wishing to perform novel and special applications. In this paper, we present a simple and flexible control solution that just depends on software running on a single-processor personal computer with real-time Linux operating system to carry out all the control tasks including negative feedback, tip moving, data processing and user interface. In this way, we fully exploit the potential of a personal computer in calculating and programming, enabling us to manipulate the scanning probe as required without any special digital control circuits and related technical know-how. This solution has been successfully applied to a homemade ultrahigh vacuum scanning tunneling microscope and a multiprobe scanning tunneling microscope.
Global synchronization of parallel processors using clock pulse width modulation
Chen, Dong; Ellavsky, Matthew R.; Franke, Ross L.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Jeanson, Mark J.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Littrell, Daniel; Ohmacht, Martin; Reed, Don D.; Schenck, Brandon E.; Swetz, Richard A.
2013-04-02
A circuit generates a global clock signal with a pulse width modification to synchronize processors in a parallel computing system. The circuit may include a hardware module and a clock splitter. The hardware module may generate a clock signal and performs a pulse width modification on the clock signal. The pulse width modification changes a pulse width within a clock period in the clock signal. The clock splitter may distribute the pulse width modified clock signal to a plurality of processors in the parallel computing system.
Ring-array processor distribution topology for optical interconnects
NASA Technical Reports Server (NTRS)
Li, Yao; Ha, Berlin; Wang, Ting; Wang, Sunyu; Katz, A.; Lu, X. J.; Kanterakis, E.
1992-01-01
The existing linear and rectangular processor distribution topologies for optical interconnects, although promising in many respects, cannot solve problems such as clock skews, the lack of supporting elements for efficient optical implementation, etc. The use of a ring-array processor distribution topology, however, can overcome these problems. Here, a study of the ring-array topology is conducted with an aim of implementing various fast clock rate, high-performance, compact optical networks for digital electronic multiprocessor computers. Practical design issues are addressed. Some proof-of-principle experimental results are included.
Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, G. Patrick; Browne, Jolyon
The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.
Language and Program for Documenting Software Design
NASA Technical Reports Server (NTRS)
Kleine, H.; Zepko, T. M.
1986-01-01
The Software Design and Documentation Language (SDDL) provides an effective communication medium to support the design and documentation of complex software applications. SDDL supports communication among all members of a software design team and provides for production of informative documentation on the design effort. Use of an SDDL-generated document to analyze a design makes it possible to eliminate many errors that would otherwise not be detected until coding and testing are attempted. The SDDL processor program translates the designer's creative thinking into an effective document for communication. The processor performs as many automatic functions as possible, freeing the designer's energy for creative effort. The SDDL processor program is written in PASCAL.
2010-03-01
Report covering October 2008 - October 2009: Performance and Power Optimization for Cognitive Processor Design. Recoverable topics include cognitive models and algorithms for intelligent text recognition (among them the Brain-State-in-a-Box neural network model), an ASIC-style design and synthesis flow for the FPU, final layouts, and a projected performance and power roadmap.
Rapid Prototyping of Application Specific Signal Processors (RASSP)
1992-10-01
Recoverable fragments of this scanned excerpt indicate that RASSP tools and standards would be strongly based on VHDL in its latest form(s) and on EDIF, with libraries for COTS parts and both schematic and graphical-language design capture supported by commercial and proprietary tools such as Compass, drawing on industry, government, research, and academic institutions (including CFI).
ORNL Cray X1 evaluation status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, P.K.; Alexander, R.A.; Apra, E.
2004-05-01
On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE applications codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.
The RTE inversion on FPGA aboard the solar orbiter PHI instrument
NASA Astrophysics Data System (ADS)
Cobos Carrascosa, J. P.; Aparicio del Moral, B.; Ramos Mas, J. L.; Balaguer, M.; López Jiménez, A. C.; del Toro Iniesta, J. C.
2016-07-01
In this work we propose a multiprocessor architecture to reach high performance in floating point operations by using radiation tolerant FPGA devices, under narrow time and power constraints. This architecture is used in the PHI instrument that carries out the scientific analysis aboard the ESA's Solar Orbiter mission. The proposed architecture, in a SIMD flavor, is intended to be an accelerator within the Data Processing Unit (composed of a main Leon processor and two FPGAs) for carrying out the RTE inversion on board the spacecraft using a relatively slow FPGA device, the Xilinx XQR4VSX55. The proposed architecture makes full use of the FPGA resources in order to meet the computational requirements, and it improves on the performance of a ground-based system built on commercial CPUs in both execution time and power consumption. In this work we demonstrate the feasibility of using these FPGA devices embedded in the SO/PHI instrument. With that goal in mind, we perform tests to evaluate the scientific results and to measure the processing time and power consumption for carrying out the RTE inversion.
NASA Astrophysics Data System (ADS)
Mehring, James W.; Thomas, Scott D.
1995-11-01
The Data Services Segment of the Defense Mapping Agency's Digital Production System provides a digital archive of imagery source data for use by DMA's cartographic users. This system was developed in the mid-1980s and is currently undergoing modernization. This paper addresses the modernization of the imagery buffer function, which was performed by custom hardware in the baseline system and is being replaced by a RAID Server based on commercial off-the-shelf (COTS) hardware. The paper briefly describes the baseline DMA image system and the modernization program that is currently under way. Throughput benchmark measurements were made to make design configuration decisions for a COTS RAID Server to perform as the system image buffer. The test program began with performance measurements of the RAID read and write operations between the RAID arrays and the server CPU for RAID levels 0, 5 and 0+1. Interface throughput measurements were made for the HiPPI interface between the RAID Server and the image archive and processing system, as well as the client side interface between a custom interface board that provides the interface between the internal bus of the RAID Server and the Input-Output Processor (IOP) external wideband network currently in place in the DMA system to service client workstations. End to end measurements were taken from the HiPPI interface through the RAID write and read operations to the IOP output interface.
1985-01-01
Artificial Intelligence and Its Use in Cost Analyses, with an Example in Cost Performance Measurement (Defense Systems Management College). With the advent of artificial intelligence (AI), we are entering a new era of natural language processors; one commercially available natural language processor is INTELLECT, by Artificial Intelligence Incorporated, Waltham, Mass.
Expedition Seven CDR Malenchenko performs IFM on Condensate Water Processor
2003-07-03
ISS007-E-09229 (3 July 2003) --- Cosmonaut Yuri I. Malenchenko, Expedition 7 mission commander, performs scheduled in-flight maintenance (IFM) on the condensate water processor (SRV-K2M) by removing and replacing its BKO multifiltration/purification column unit, which has reached its service life limit (450 liters min.). The old unit will be discarded on Progress. The IFM took place in the Zvezda Service Module on the International Space Station (ISS). Malenchenko represents Rosaviakosmos.
Expedition Seven CDR Malenchenko performs IFM on Condensate Water Processor
2003-07-03
ISS007-E-09231 (3 July 2003) --- Cosmonaut Yuri I. Malenchenko, Expedition 7 mission commander, performs scheduled in-flight maintenance (IFM) on the condensate water processor (SRV-K2M) by removing and replacing its BKO multifiltration/purification column unit, which has reached its service life limit (450 liters min.). The old unit will be discarded on Progress. The IFM took place in the Zvezda Service Module on the International Space Station (ISS). Malenchenko represents Rosaviakosmos.
Instruction-level performance modeling and characterization of multimedia applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y.; Cameron, K.W.
1999-06-01
One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization and empirical/analytical modeling.
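The kind of counter-based decomposition described here can be sketched with a toy model. The split below is a generic textbook-style separation of measured CPI into a memory-free component and a memory-stall component estimated from miss counters; it is an illustrative assumption, not the authors' derived equations, and the counter values are made up for the example.

```c
#include <stdio.h>

/* Toy counter-based CPI decomposition (illustrative only, not the
 * authors' model): measured CPI is split into a memory-free component
 * CPI0 and a memory-stall component estimated from cache-miss counters. */
struct counters {
    double cycles;        /* elapsed core cycles                   */
    double instructions;  /* retired instructions                  */
    double l2_misses;     /* last-level cache misses               */
    double miss_penalty;  /* assumed average stall cycles per miss */
};

static double cpi_measured(const struct counters *c)
{
    return c->cycles / c->instructions;
}

static double cpi0_estimate(const struct counters *c)
{
    double stall_per_instr = (c->l2_misses / c->instructions) * c->miss_penalty;
    return cpi_measured(c) - stall_per_instr;   /* CPI without memory effect */
}

int main(void)
{
    /* made-up counter values for an MPEG-decoder-like run */
    struct counters c = { 2.4e9, 1.6e9, 6.0e6, 120.0 };
    printf("CPI measured = %.3f\n", cpi_measured(&c));
    printf("CPI0 (no memory effect) ~ %.3f\n", cpi0_estimate(&c));
    return 0;
}
```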
Acousto-optic time- and space-integrating spotlight-mode SAR processor
NASA Astrophysics Data System (ADS)
Haney, Michael W.; Levy, James J.; Michael, Robert R., Jr.
1993-09-01
The technical approach and recent experimental results for the acousto-optic time- and space- integrating real-time SAR image formation processor program are reported. The concept overcomes the size and power consumption limitations of electronic approaches by using compact, rugged, and low-power analog optical signal processing techniques for the most computationally taxing portions of the SAR imaging problem. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results include a demonstration of the processor's ability to perform high-resolution spotlight-mode SAR imaging by simultaneously compensating for range migration and range/azimuth coupling in the analog optical domain, thereby avoiding a highly power-consuming digital interpolation or reformatting operation usually required in all-electronic approaches.
NASA Astrophysics Data System (ADS)
Rumsewicz, Michael
1994-04-01
In this paper, we examine call completion performance, rather than message throughput, in a Common Channel Signaling network in which the processing resources, and not the transmission resources, of a Signaling Transfer Point (STP) are overloaded. Specifically, we perform a transient analysis, via simulation, of a network consisting of a single Central Processor-based STP connecting many local exchanges. We consider the efficacy of using the Transfer Controlled (TFC) procedure when the network call attempt rate exceeds the processing capability of the STP. We find the following: (1) the success of the control depends critically on the rate at which TFCs are sent; (2) use of the TFC procedure in the event of processor overload can provide reasonable call completion rates.
Optimistic barrier synchronization
NASA Technical Reports Server (NTRS)
Nicol, David M.
1992-01-01
Barrier synchronization is a fundamental operation in parallel computation. In many contexts, at the point a processor enters a barrier it knows that it has already processed all the work required of it prior to synchronization. The alternative case, when a processor cannot enter a barrier with the assurance that it has already performed all the necessary pre-synchronization computation, is treated. The problem arises when the number of pre-synchronization messages to be received by a processor is unknown, for example, in a parallel discrete simulation or any other computation that is largely driven by an unpredictable exchange of messages. We describe an optimistic O(log^2 P) barrier algorithm for such problems, study its performance on a large-scale parallel system, and consider extensions to general associative reductions as well as associative parallel prefix computations.
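For contrast with the optimistic scheme summarized above, the sketch below shows an ordinary centralized sense-reversing barrier in C11 atomics. It is only safe when each thread already knows it has finished all pre-synchronization work; coping with an unknown number of incoming messages is exactly what the optimistic O(log^2 P) algorithm adds. The structure and names are illustrative, not taken from the paper.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Ordinary centralized sense-reversing barrier (C11 atomics).  Each
 * thread may enter only when it knows all of its pre-synchronization
 * work is done; the optimistic barrier removes exactly this assumption. */
typedef struct {
    atomic_int  count;    /* threads still to arrive this episode */
    atomic_bool sense;    /* global sense, flipped each episode   */
    int         nthreads;
} barrier_t;

void barrier_init(barrier_t *b, int nthreads)
{
    atomic_init(&b->count, nthreads);
    atomic_init(&b->sense, false);
    b->nthreads = nthreads;
}

void barrier_wait(barrier_t *b, bool *local_sense)
{
    *local_sense = !*local_sense;                 /* my sense for this episode */
    if (atomic_fetch_sub(&b->count, 1) == 1) {    /* last thread to arrive     */
        atomic_store(&b->count, b->nthreads);     /* reset for next episode    */
        atomic_store(&b->sense, *local_sense);    /* release everyone else     */
    } else {
        while (atomic_load(&b->sense) != *local_sense)
            ;                                     /* spin until released       */
    }
}
```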
Mechanism to support generic collective communication across a variety of programming models
Almasi, Gheorghe [Ardsley, NY; Dozsa, Gabor [Ardsley, NY; Kumar, Sameer [White Plains, NY
2011-07-19
A system and method for supporting collective communications on a plurality of processors that use different parallel programming paradigms, in one aspect, may comprise a schedule defining one or more tasks in a collective operation, an executor that executes the task, a multisend module to perform one or more data transfer functions associated with the tasks, and a connection manager that controls one or more connections and identifies an available connection. The multisend module uses the available connection in performing the one or more data transfer functions. A plurality of processors that use different parallel programming paradigms can use a common implementation of the schedule module, the executor module, the connection manager and the multisend module via a language adaptor specific to a parallel programming paradigm implemented on a processor.
A Parallel Algorithm for Contact in a Finite Element Hydrocode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Timothy G.
A parallel algorithm is developed for contact/impact of multiple three dimensional bodies undergoing large deformation. As time progresses the relative positions of contact between the multiple bodies changes as collision and sliding occurs. The parallel algorithm is capable of tracking these changes and enforcing an impenetrability constraint and momentum transfer across the surfaces in contact. Portions of the various surfaces of the bodies are assigned to the processors of a distributed-memory parallel machine in an arbitrary fashion, known as the primary decomposition. A secondary, dynamic decomposition is utilized to bring opposing sections of the contacting surfaces together on the same processors, so that opposing forces may be balanced and the resultant deformation of the bodies calculated. The secondary decomposition is accomplished and updated using only local communication with a limited subset of neighbor processors. Each processor represents both a domain of the primary decomposition and a domain of the secondary, or contact, decomposition. Thus each processor has four sets of neighbor processors: (a) those processors which represent regions adjacent to it in the primary decomposition, (b) those processors which represent regions adjacent to it in the contact decomposition, (c) those processors which send it the data from which it constructs its contact domain, and (d) those processors to which it sends its primary domain data, from which they construct their contact domains. The latter three of these neighbor sets change dynamically as the simulation progresses. By constraining all communication to these sets of neighbors, all global communication, with its attendant nonscalable performance, is avoided. A set of tests are provided to measure the degree of scalability achieved by this algorithm on up to 1024 processors. Issues related to the operating system of the test platform which lead to some degradation of the results are analyzed. This algorithm has been implemented as the contact capability of the ALE3D multiphysics code, and is currently in production use.
Processor Emulator with Benchmark Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya
2015-11-13
A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].
Assignment Of Finite Elements To Parallel Processors
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.
1990-01-01
Elements are assigned approximately optimally to subdomains. A mapping algorithm based on the simulated-annealing concept is used to minimize the approximate time required to perform the finite-element computation on a hypercube computer or other network of parallel data processors. The mapping algorithm is needed when the shape of the domain is complicated or when it is otherwise not obvious which allocation of elements to subdomains minimizes the cost of computation.
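The simulated-annealing idea behind such a mapping can be sketched as follows. The cost function (cut edges between adjacent elements placed on different processors plus a load-imbalance penalty), the move set, and the cooling schedule below are illustrative assumptions, not the abstract's actual cost model.

```c
#include <stdlib.h>
#include <math.h>

/* Illustrative simulated-annealing mapping of finite elements to
 * processors.  The cost model (cut edges between adjacent elements on
 * different processors, plus a load-imbalance penalty) and the cooling
 * schedule are simplified assumptions. */
static double cost(const int *assign, int nelem, int nproc,
                   const int (*edges)[2], int nedges, double lambda)
{
    double cut = 0.0;
    for (int e = 0; e < nedges; ++e)
        if (assign[edges[e][0]] != assign[edges[e][1]])
            cut += 1.0;

    double imbalance = 0.0, ideal = (double)nelem / nproc;
    for (int p = 0; p < nproc; ++p) {
        int load = 0;
        for (int i = 0; i < nelem; ++i)
            if (assign[i] == p) ++load;
        imbalance += (load - ideal) * (load - ideal);
    }
    return cut + lambda * imbalance;
}

void anneal_mapping(int *assign, int nelem, int nproc,
                    const int (*edges)[2], int nedges)
{
    double T = 10.0, lambda = 0.1;
    double c = cost(assign, nelem, nproc, edges, nedges, lambda);

    for (int step = 0; step < 20000; ++step, T *= 0.9995) {
        int i = rand() % nelem;               /* pick an element        */
        int old = assign[i];
        assign[i] = rand() % nproc;           /* propose a reassignment */
        double cnew = cost(assign, nelem, nproc, edges, nedges, lambda);
        double dc = cnew - c;
        if (dc <= 0.0 || exp(-dc / T) > (double)rand() / RAND_MAX)
            c = cnew;                         /* accept the move        */
        else
            assign[i] = old;                  /* reject and restore     */
    }
}
```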
Space Station Water Processor Process Pump
NASA Technical Reports Server (NTRS)
Parker, David
1995-01-01
This report presents the results of the development program conducted under contract NAS8-38250-12 related to the International Space Station (ISS) Water Processor (WP) Process Pump. The results of the Process Pump evaluation conducted on this program indicate that further development is required in order to achieve the performance and life requirements for the ISS WP.
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1976-01-01
The functions and operating rules of the SPAR system, which is a group of computer programs used primarily to perform stress, buckling, and vibrational analyses of linear finite element systems, were given. The following subject areas were discussed: basic information, structure definition, format system matrix processors, utility programs, static solutions, stresses, sparse matrix eigensolver, dynamic response, graphics, and substructure processors.
Design of a massively parallel computer using bit serial processing elements
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing
1995-01-01
A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
Using a Multicore Processor for Rover Autonomous Science
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin; Estlin, Tara; Clement, Bradley; Springer, Paul
2011-01-01
Multicore processing promises to be a critical component of future spacecraft. It provides immense increases in onboard processing power and provides an environment for directly supporting fault-tolerant computing. This paper discusses using a state-of-the-art multicore processor to efficiently perform image analysis onboard a Mars rover in support of autonomous science activities.
Parallel text rendering by a PostScript interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kritskii, S.P.; Zastavnoi, B.A.
1994-11-01
The most radical method of increasing the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter for rendering text. The proposed method is based on decomposition of the outlines of letters into horizontal strips covering equal areas. The subroutines thus obtained are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters into subroutines so that each may be colored independently of the others. The algorithm uses special estimates for estimating the correct partition so that the corresponding outlines are divided into horizontal strips. A method is presented for finding such estimates. Two different processing approaches are presented. In the first, one of the processors performs the decomposition of the outlines and distributes the strips to the remaining processors, which are responsible for the rendering. In the second approach, the decomposition process is itself distributed among the processors in the network.
NASA Astrophysics Data System (ADS)
Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos
2011-01-01
General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
Control Software for Advanced Video Guidance Sensor
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.
2006-01-01
Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
Fast 2D FWI on a multi and many-cores workstation.
NASA Astrophysics Data System (ADS)
Thierry, Philippe; Donno, Daniela; Noble, Mark
2014-05-01
Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard 2-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here an MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which runs simultaneously on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is being able to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each of them having up to 4 threads, this many-core device can be seen as a shared memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can handle several co-processors, making the workstation a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is simply recompiled to get a Xeon Phi x86 binary. This multi- and many-core configuration uses standard compilers and the associated MPI and math libraries under Linux; therefore, the cost of code development remains constant while computation time improves. We choose to implement the code in the so-called symmetric mode to fully use the capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e., running only on the co-processor) thanks to the Linux ssh and NFS capabilities. Usual care in optimization and SIMD vectorization is taken to ensure optimal performance and to analyze the application's performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (L-BFGS) optimization scheme for the model parameter updates. Parallelization is achieved through standard MPI distribution of shot gathers and OpenMP for domain decomposition within the co-processor. Taking advantage of the 16 GB of memory available on the co-processor, we are able to keep wavefields in memory to achieve the gradient computation by cross-correlation of forward and back-propagated wavefields, as needed by our time-domain FWI scheme, without heavy traffic on the I/O subsystem and PCIe bus. In this presentation we also review some simple methodologies for comparing expected with measured performance, in order to estimate the optimization effort before starting any major modification or rewriting of research codes. The key message is the ease of use and development of this hybrid configuration, which reaches not the absolute peak performance but the optimum that ensures the best balance between geophysical and computational development.
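The parallel structure described (MPI over shot gathers, OpenMP within each domain) can be summarized in a minimal skeleton. The function and variable names below are placeholders; error handling, I/O, and the actual finite-difference and L-BFGS kernels are omitted, so this is a sketch of the distribution pattern only.

```c
#include <mpi.h>
#include <omp.h>

/* Minimal skeleton of the described hybrid scheme: shot gathers are
 * distributed across MPI ranks, and the work inside each shot is threaded
 * with OpenMP.  The modeling routine is a placeholder stub so the skeleton
 * compiles; the real code would forward-model the shot, back-propagate the
 * residuals, and cross-correlate wavefields to accumulate the gradient. */
static void model_shot_and_accumulate_gradient(int shot, float *gradient, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        gradient[i] += 0.0f * (float)shot;   /* placeholder work only */
}

void fwi_gradient_iteration(float *gradient, int n, int nshots)
{
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < n; ++i) gradient[i] = 0.0f;

    /* round-robin distribution of shot gathers over MPI ranks */
    for (int shot = rank; shot < nshots; shot += size)
        model_shot_and_accumulate_gradient(shot, gradient, n);

    /* sum the partial gradients from all ranks */
    MPI_Allreduce(MPI_IN_PLACE, gradient, n, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
}
```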
Generic Divide and Conquer Internet-Based Computing
NASA Technical Reports Server (NTRS)
Radenski, Atanas; Follen, Gregory J. (Technical Monitor)
2001-01-01
The rapid growth of internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of new, internet-oriented software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this research project is to contribute to better understanding of the transition to internet-based high-performance computing and to develop solutions for some of the difficulties of this transition. More specifically, our goal is to design an architecture for generic divide and conquer internet-based computing, to develop a portable implementation of this architecture, to create an example library of high-performance divide-and-conquer computing agents that run on top of this architecture, and to evaluate the performance of these agents. We have been designing an architecture that incorporates a master task-pool server and utilizes satellite computational servers that operate on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. Our designed architecture is intended to be complementary to and accessible from computational grids such as Globus, Legion, and Condor. Grids provide remote access to existing high-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end internet nodes. Our project is focused on a generic divide-and-conquer paradigm and its applications that operate on a loose and ever changing pool of lower-end internet nodes.
Initial Performance Results on IBM POWER6
NASA Technical Reports Server (NTRS)
Saini, Subhash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piyush
2008-01-01
The POWER5+ processor has a faster memory bus than that of the previous generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the latter is better than that of the former (5.7 GB/s vs. 4.3 GB/s). The reason for this is that in the POWER5+, the two cores on the chip share the L2 cache, L3 cache and memory bus. The memory controller is also on the chip and is shared by the two cores. This serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM has designed the POWER6 processor so as to avoid the bottlenecks due to the L2 cache, memory controller and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB - double that of the POWER5+), memory controller and buffer chips. Each core in the POWER6 runs at 4.7 GHz instead of 1.9 GHz in the POWER5+. In this paper, we evaluate the performance of a dual-core POWER6-based IBM p6-570 system, and we compare its performance with that of a dual-core POWER5+-based IBM p575+ system. In this evaluation, we have used the High-Performance Computing Challenge (HPCC) benchmarks, NAS Parallel Benchmarks (NPB), and four real-world applications--three from computational fluid dynamics and one from climate modeling.
Stateless and stateful implementations of faithful execution
Pierson, Lyndon G; Witzke, Edward L; Tarman, Thomas D; Robertson, Perry J; Eldridge, John M; Campbell, Philip L
2014-12-16
A faithful execution system includes system memory, a target processor, and protection engine. The system memory stores a ciphertext including value fields and integrity fields. The value fields each include an encrypted executable instruction and the integrity fields each include an encrypted integrity value for determining whether a corresponding one of the value fields has been modified. The target processor executes plaintext instructions decoded from the ciphertext while the protection engine is coupled between the system memory and the target processor. The protection engine includes logic to retrieve the ciphertext from the system memory, decrypt the value fields into the plaintext instructions, perform an integrity check based on the integrity fields to determine whether any of the corresponding value fields have been modified, and provide the plaintext instructions to the target processor for execution.
Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1997-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.
Friedmann, Simon; Frémaux, Nicolas; Schemmel, Johannes; Gerstner, Wulfram; Meier, Karlheinz
2013-01-01
In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP, as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, a simple interface between analog synapses and processor is sufficient for learning, and performance is insensitive to mismatch. Further, we consider communication latency between wafer and the conventional control computer system that is simulating the environment. This latency increases the delay, with which the reward is sent to the embedded processor. Because of the time continuous operation of the analog synapses, delay can cause a deviation of the updates as compared to the not delayed situation. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.
Performance characterization of image and video analysis systems at Siemens Corporate Research
NASA Astrophysics Data System (ADS)
Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael
2000-06-01
There has been a significant increase in commercial products using imaging analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security, etc. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms to faster deployment of image analysis systems is the lack of characterization of limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bylaska, Eric J.; Jacquelin, Mathias; De Jong, Wibe A.
2017-10-20
Ab-initio Molecular Dynamics (AIMD) methods are an important class of algorithms, as they enable scientists to understand the chemistry and dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. Many-core architectures such as the Intel® Xeon Phi™ processor are an interesting and promising target for these algorithms, as they can provide the computational power that is needed to solve interesting problems in chemistry. In this paper, we describe the efforts of refactoring the existing AIMD plane-wave method of NWChem from an MPI-only implementation to a scalable, hybrid code that employs MPI and OpenMP to exploit the capabilities of current and future many-core architectures. We describe the optimizations required to get close to optimal performance for the multiplication of the tall-and-skinny matrices that form the core of the computational algorithm. We present strong scaling results on the complete AIMD simulation for a test case that simulates 256 water molecules and that strong-scales well on a cluster of 1024 nodes of Intel Xeon Phi processors. We compare the performance obtained with a cluster of dual-socket Intel® Xeon® E5–2698v3 processors.
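Tall-and-skinny products of the kind mentioned here typically have the shape C = A^T B with A and B of size n x k, where n (the number of plane-wave coefficients) is large and k (the number of orbitals) is small. The OpenMP sketch below shows the generic reduction pattern such a product requires; it is an illustrative naive kernel, not NWChem's optimized implementation, which would rely on blocking and tuned BLAS routines.

```c
#include <omp.h>

/* Illustrative tall-and-skinny product C = A^T * B, with A and B stored
 * row-major as n x k (n large, k small) and C as k x k.  This shows the
 * generic pattern only; an optimized implementation would block the long
 * dimension and call a tuned dgemm rather than use this naive loop nest. */
void ts_matmul(const double *A, const double *B, double *C, long n, int k)
{
    #pragma omp parallel for collapse(2)
    for (int p = 0; p < k; ++p) {
        for (int q = 0; q < k; ++q) {
            double sum = 0.0;
            for (long i = 0; i < n; ++i)       /* long reduction dimension */
                sum += A[i * k + p] * B[i * k + q];
            C[p * k + q] = sum;
        }
    }
}
```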
Visualization Co-Processing of a CFD Simulation
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
1999-01-01
OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on a SGI Origin 2000 computer(O2K) system. The shared memory version of the solver is used with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. In order to study the scaling and performance of the visualization co-processing system, sample runs are made with different processor groups in the range of 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work is in progress which is aimed at obtaining improved parallel performance of the solver and removing the limitations of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of the explicit message passing.
Optimizing the inner loop of the gravitational force interaction on modern processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Michael S
2010-12-08
We have achieved superior performance on multiple generations of the fastest supercomputers in the world with our hashed oct-tree N-body code (HOT), spanning almost two decades and garnering multiple Gordon Bell Prizes for significant achievement in parallel processing. Execution time for our N-body code is largely influenced by the force calculation in the inner loop. Improvements to the inner loop using SSE3 instructions have enabled the calculation of over 200 million gravitational interactions per second per processor on a 2.6 GHz Opteron, for a computational rate of over 7 Gflops in single precision (70% of peak). We obtain optimal performance on some processors (including the Cell) by decomposing the reciprocal square root function required for a gravitational interaction into a table lookup, Chebychev polynomial interpolation, and Newton-Raphson iteration, using the algorithm of Karp. By unrolling the loop by a factor of six, and using SPU intrinsics to compute on vectors, we obtain performance of over 16 Gflops on a single Cell SPE. Aggregated over the 8 SPEs on a Cell processor, the overall performance is roughly 130 Gflops. In comparison, the ordinary C version of our inner loop only obtains 1.6 Gflops per SPE with the spuxlc compiler.
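The idea of computing the reciprocal square root from a cheap initial estimate refined by Newton-Raphson iterations can be illustrated in scalar C. The sketch below seeds the iteration with a simple bit-level estimate rather than the table lookup plus Chebychev interpolation described above, and the gravitational kernel is a simplified single-pair interaction; both simplifications are illustrative assumptions.

```c
#include <stdint.h>
#include <string.h>

/* Reciprocal square root from a crude initial estimate refined by
 * Newton-Raphson: y_{n+1} = y_n * (1.5 - 0.5 * x * y_n^2).  The paper's
 * Cell SPE version seeds the iteration with a table lookup plus Chebychev
 * interpolation; a bit-level estimate is used here purely for illustration. */
static float rsqrt_nr(float x)
{
    float y;
    uint32_t i;
    memcpy(&i, &x, sizeof i);            /* reinterpret the bits of x  */
    i = 0x5f3759dfu - (i >> 1);          /* crude initial estimate     */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - 0.5f * x * y * y);   /* first Newton refinement    */
    y = y * (1.5f - 0.5f * x * y * y);   /* second refinement          */
    return y;
}

/* Simplified single-pair gravitational interaction using rsqrt_nr:
 * acc += m_j * r_ij / |r_ij|^3 (units and proper softening omitted). */
static void accumulate_force(const float ri[3], const float rj[3],
                             float mj, float acc[3])
{
    float dx = rj[0] - ri[0], dy = rj[1] - ri[1], dz = rj[2] - ri[2];
    float r2 = dx * dx + dy * dy + dz * dz + 1e-9f;  /* tiny softening */
    float rinv  = rsqrt_nr(r2);
    float rinv3 = rinv * rinv * rinv;
    acc[0] += mj * dx * rinv3;
    acc[1] += mj * dy * rinv3;
    acc[2] += mj * dz * rinv3;
}
```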
Generalized Nanosatellite Avionics Testbed Lab
NASA Technical Reports Server (NTRS)
Frost, Chad R.; Sorgenfrei, Matthew C.; Nehrenz, Matt
2015-01-01
The Generalized Nanosatellite Avionics Testbed (G-NAT) lab at NASA Ames Research Center provides a flexible, easily accessible platform for developing hardware and software for advanced small spacecraft. A collaboration between the Mission Design Division and the Intelligent Systems Division, the objective of the lab is to provide testing data and general test protocols for advanced sensors, actuators, and processors for CubeSat-class spacecraft. By developing test schemes for advanced components outside of the standard mission lifecycle, the lab is able to help reduce the risk carried by advanced nanosatellite or CubeSat missions. Such missions are often allocated very little time for testing, and too often the test facilities must be custom-built for the needs of the mission at hand. The G-NAT lab helps to eliminate these problems by providing an existing suite of testbeds that combines easily accessible, commercial-off-the-shelf (COTS) processors with a collection of existing sensors and actuators.
Application of convolve-multiply-convolve SAW processor for satellite communications
NASA Technical Reports Server (NTRS)
Lie, Y. S.; Ching, M.
1991-01-01
There is a need for a satellite communications receiver that can perform simultaneous multi-channel processing of single channel per carrier (SCPC) signals originating from various small (mobile or fixed) earth stations. The number of ground users can be as many as 1000. The conventional technique for simultaneously processing these signals is to employ as many RF bandpass filters as there are channels. Consequently, such an approach would result in a bulky receiver, which becomes impractical for satellite applications. A unique approach utilizing a real-time surface acoustic wave (SAW) chirp transform processor is presented. The application of a Convolve-Multiply-Convolve (CMC) chirp transform processor is described. The CMC processor transforms each input channel into a unique timeslot, while preserving its modulation content (in this case QPSK). Subsequently, each channel is individually demodulated without the need for input channel filters. Circuit complexity is significantly reduced, because the output frequency of the CMC processor is common for all input channel frequencies. Theoretical analysis and experimental results are in good agreement.
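The chirp-transform principle that such a processor implements can be illustrated with the discrete (Bluestein) chirp transform, which builds a Fourier transform from chirp multiplications and a chirp convolution. The sketch below uses the multiply-convolve-multiply ordering for simplicity; the SAW device realizes the dual convolve-multiply-convolve arrangement with dispersive delay lines, so this is only an illustration of the principle, not the hardware algorithm.

```python
# Discrete chirp transform (Bluestein form): pre-multiply by a chirp, convolve
# with the conjugate chirp, post-multiply by the chirp. A SAW CMC processor
# realizes the dual ordering in analog hardware; this is a numerical
# illustration of the chirp-transform principle only.
import numpy as np

def chirp_dft(x):
    n = len(x)
    k = np.arange(n)
    chirp = np.exp(-1j * np.pi * k**2 / n)        # down-chirp
    a = x * chirp                                  # multiply
    m = np.arange(-(n - 1), n)
    h = np.exp(1j * np.pi * m**2 / n)              # up-chirp impulse response
    conv = np.convolve(a, h)[n - 1:2 * n - 1]      # convolve
    return chirp * conv                            # multiply

x = np.random.default_rng(1).standard_normal(8) + 0j
print(np.allclose(chirp_dft(x), np.fft.fft(x)))    # True
```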
NASA Astrophysics Data System (ADS)
Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang
This paper presents an Application-Specific Instruction-set Processor (ASIP) for the SHA-3 BLAKE algorithm family, built by instruction set extensions (ISE) of a RISC (reduced instruction set computer) processor. Through a design space exploration for this ASIP to increase performance and reduce area cost, we accomplish an efficient hardware and software implementation of the BLAKE algorithm. The special instructions and their well-matched hardware function unit improve the calculation of the key section of the algorithm, namely the G-functions. Also, relaxing the time constraint of the special function unit decreases its hardware cost while keeping the high data throughput of the processor. Evaluation results show the ASIP achieves 335 Mbps and 176 Mbps for BLAKE-256 and BLAKE-512, respectively. The extra area cost is only 8.06k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycles per byte. In fact, both the high throughput and the low hardware cost achieved by this programmable processor are comparable to those of ASIC implementations.
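For orientation, the G-function targeted by the special instructions has the add-XOR-rotate structure shown below (here for BLAKE-256, with the round-permuted message-word/constant XOR terms passed in as precomputed inputs). This is a simplified Python sketch of the round function's data path, not the ASIP's instruction semantics.

```python
# Simplified BLAKE-256 G-function data path (32-bit add / XOR / rotate).
# mx and my stand for the message-word/constant XOR terms selected by the
# round permutation; their selection is omitted here for brevity.
MASK32 = 0xFFFFFFFF

def rotr32(x, n):
    return ((x >> n) | (x << (32 - n))) & MASK32

def g(a, b, c, d, mx, my):
    a = (a + b + mx) & MASK32
    d = rotr32(d ^ a, 16)
    c = (c + d) & MASK32
    b = rotr32(b ^ c, 12)
    a = (a + b + my) & MASK32
    d = rotr32(d ^ a, 8)
    c = (c + d) & MASK32
    b = rotr32(b ^ c, 7)
    return a, b, c, d

print(tuple(hex(v) for v in g(0x6A09E667, 0xBB67AE85, 0x3C6EF372, 0xA54FF53A, 0, 0)))
```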
NASA Astrophysics Data System (ADS)
Xie, Yiwei; Geng, Zihan; Zhuang, Leimeng; Burla, Maurizio; Taddei, Caterina; Hoekman, Marcel; Leinse, Arne; Roeloffzen, Chris G. H.; Boller, Klaus-J.; Lowery, Arthur J.
2017-12-01
Integrated optical signal processors have been identified as a powerful engine for optical processing of microwave signals. They enable wideband and stable signal processing operations on miniaturized chips with ultimate control precision. As a promising application, such processors enable photonic implementations of reconfigurable radio frequency (RF) filters with wide design flexibility, large bandwidth, and high-frequency selectivity. This is a key technology for photonic-assisted RF front ends that opens a path to overcoming the bandwidth limitation of current digital electronics. Here, the recent progress of integrated optical signal processors for implementing such RF filters is reviewed. We highlight the use of a low-loss, high-index-contrast stoichiometric silicon nitride waveguide, which promises to serve as a practical material platform for realizing high-performance optical signal processors and points toward photonic RF filters with digital signal processing (DSP)-level flexibility, hundreds-of-GHz bandwidth, MHz-band frequency selectivity, and full system integration on a chip scale.
NASA Astrophysics Data System (ADS)
Echigo, Mitsuaki; Shinke, Norihisa; Takami, Susumu; Tabata, Takeshi
Natural gas fuel processors have been developed for 500 W and 1 kW class residential polymer electrolyte fuel cell (PEFC) systems. These fuel processors contain all the elements—desulfurizers, steam reformers, CO shift converters, CO preferential oxidation (PROX) reactors, steam generators, burners and heat exchangers—in one package. For the PROX reactor, a single-stage PROX process using a novel PROX catalyst was adopted. In the 1 kW class fuel processor, a thermal efficiency of 83% at HHV was achieved at nominal output, assuming a H2 utilization rate in the cell stack of 76%. A CO concentration below 1 ppm in the product gas was achieved even under the condition of [O2]/[CO] = 1.5 at the PROX reactor. The long-term durability of the fuel processor was demonstrated, with almost no deterioration in thermal efficiency and CO concentration over 10,000 h of operation, 1000 start-and-stop cycles, and 25,000 load-change cycles.
An extensible infrastructure for fully automated spike sorting during online experiments.
Santhanam, Gopal; Sahani, Maneesh; Ryu, Stephen; Shenoy, Krishna
2004-01-01
When recording extracellular neural activity, it is often necessary to distinguish action potentials arising from distinct cells near the electrode tip, a process commonly referred to as "spike sorting." In a number of experiments, notably those that involve direct neuroprosthetic control of an effector, this cell-by-cell classification of the incoming signal must be achieved in real time. Several commercial offerings are available for this task, but all of these require some manual supervision per electrode, making each scheme cumbersome with large electrode counts. We present a new infrastructure that leverages existing unsupervised algorithms to sort and subsequently implement the resulting signal classification rules for each electrode using a commercially available Cerebus neural signal processor. We demonstrate an implementation of this infrastructure to classify signals from a cortical electrode array, using a probabilistic clustering algorithm (described elsewhere). The data were collected from a rhesus monkey performing a delayed center-out reach task. We used both sorted and unsorted (thresholded) action potentials from an array implanted in pre-motor cortex to "predict" the reach target, a common decoding operation in neuroprosthetic research. The use of sorted spikes led to an improvement in decoding accuracy of between 3.6 and 6.4%.
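As a schematic illustration of the kind of per-electrode rule such an infrastructure installs, the Python sketch below detects threshold crossings and assigns each snippet to the nearest cluster mean. It is a toy stand-in, not the probabilistic clustering algorithm or the Cerebus interface used in the study.

```python
# Toy per-electrode spike detection and classification: threshold crossing
# followed by nearest-cluster-mean assignment. Illustrative only.
import numpy as np

def detect_spikes(trace, thresh, dead_time=30):
    """Indices where the signal crosses below a negative threshold."""
    crossings = np.flatnonzero((trace[1:] < thresh) & (trace[:-1] >= thresh)) + 1
    keep, last = [], -dead_time
    for i in crossings:                 # enforce a refractory window between events
        if i - last >= dead_time:
            keep.append(i)
            last = i
    return np.array(keep, dtype=int)

def classify(waveform, cluster_means):
    """Assign a snippet to the nearest cluster mean (a stand-in sorting rule)."""
    dists = [np.linalg.norm(waveform - m) for m in cluster_means]
    return int(np.argmin(dists))

rng = np.random.default_rng(4)
trace = rng.standard_normal(2000)
print(detect_spikes(trace, thresh=-3.0))
```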
NASA Technical Reports Server (NTRS)
1994-01-01
A Small Business Innovation Research (SBIR) contract resulted in a series of commercially available lasers, which have application in fiber optic communications, difference frequency generation, fiber optic sensing and general laboratory use. Developed under a Small Business Innovation Research (SBIR) contract, the Phase Doppler Particle Analyzer is a non-disruptive, highly accurate laser-based method of determining particle size, number density, trajectory, turbulence and other information about particles passing through a measurement probe volume. The system consists of an optical transmitter and receiver, signal processor and computer with data acquisition and analysis software. A variety of systems are offered for applications including spray characterization for paint, and agricultural and other sprays. The Microsizer, a related product, is used in medical equipment manufacturing and analysis of contained flows. High frequency components and subsystems produced by Millitech Corporation are marketed for both research and commercial use. These systems, which operate in the upper portion of the millimeter-wave band, resulted from a number of Small Business Innovation Research (SBIR) projects. By developing very high performance mixers and multipliers, the company has advanced the state of the art in sensitive receiver technology. Components are used in receivers and transceivers for monitoring chlorine monoxide and ozone, in plasma characterization, and in material properties characterization.
A Cost Effective System Design Approach for Critical Space Systems
NASA Technical Reports Server (NTRS)
Abbott, Larry Wayne; Cox, Gary; Nguyen, Hai
2000-01-01
NASA-JSC required an avionics platform capable of serving a wide range of applications in a cost-effective manner. In part, making the avionics platform cost effective means adhering to open standards and supporting the integration of COTS products with custom products. Inherently, operation in space requires low power, mass, and volume while retaining high performance, reconfigurability, scalability, and upgradability. The Universal Mini-Controller project is based on a modified PC/104-Plus architecture while maintaining full compatibility with standard COTS PC/104 products. The architecture consists of a library of building block modules, which can be mixed and matched to meet a specific application. A set of NASA-developed core building blocks (a processor card, an analog input/output card, and a Mil-Std-1553 card) has been constructed to meet critical functions and unique interfaces. The design for the processor card is based on the PowerPC architecture. This architecture provides an excellent balance between power consumption and performance, and has an upgrade path to the forthcoming radiation hardened PowerPC processor. The processor card, which makes extensive use of surface mount technology, has a 166 MHz PowerPC 603e processor, 32 Mbytes of error detected and corrected RAM, 8 Mbytes of Flash, and 1 Mbyte of EPROM, on a single PC/104-Plus card. Similar densities have been achieved with the quad channel Mil-Std-1553 card and the analog input/output cards. The power management built into the processor and its peripheral chip allows the power and performance of the system to be adjusted to meet the requirements of the application, adding another dimension to the flexibility of the Universal Mini-Controller. Unique mechanical packaging allows the Universal Mini-Controller to accommodate standard COTS and custom oversized PC/104-Plus cards. This mechanical packaging also provides thermal management via conductive cooling of COTS boards, which are typically designed for convection cooling methods.
Software for embedded processors: Problems and solutions
NASA Astrophysics Data System (ADS)
Bogaerts, J. A. C.
1990-08-01
Data acquisition systems in HEP experiments use a wide spectrum of computers to cope with two major problems: high event rates and a large data volume. They do this by using special fast trigger processors at the source to reduce the event rate by several orders of magnitude. The next stage of a data acquisition system consists of a network of fast but conventional microprocessors embedded in high-speed bus systems, where data is still further reduced, filtered, and merged. In the final stage, complete events are farmed out to another collection of processors, which reconstruct the events and perhaps achieve a further event rejection by a small factor, prior to recording onto magnetic tape. Detectors are monitored by analyzing a fraction of the data. This may be done for individual detectors at an early stage of the data acquisition, or it may be delayed until the complete events are available. A network of workstations is used for monitoring, displays, and run control. Software for trigger processors must have a simple structure. Rejection algorithms are carefully optimized, and overheads introduced by system software cannot be tolerated. The embedded microprocessors have to cooperate and need to be synchronized with the preceding and following stages. Real-time kernels are typically used to solve synchronization and communication problems. Applications are usually coded in C, which is reasonably efficient and allows direct control over low-level hardware functions. Event reconstruction software is very similar or even identical to offline software, predominantly written in FORTRAN. With the advent of powerful RISC processors, and with manufacturers tending to adopt open bus architectures, there is a move towards commercial processors and hence the introduction of the UNIX operating system. Building and controlling such a heterogeneous data acquisition system puts a heavy strain on the software. Communication is now as important as CPU capacity and I/O bandwidth, the traditional key parameters of a HEP data acquisition system. Software engineering and real-time system simulation tools are becoming indispensable for the design of future data acquisition systems.
NASA Astrophysics Data System (ADS)
Imamura, Taro; Miyanoshita, Akihiro; Todoriki, Setsuko; Hayashi, Toru
2004-09-01
Efficacy of soft-electron treatment for disinfestation of grains was investigated by treating pre-infested brown rice and adzuki beans with a commercial-scale soft-electron machine (soft-electron processor). Soft electrons at 150 kV efficiently disinfested brown rice grains pre-infested with maize weevil (Sitophilus zeamais Motschulsky) and Indian meal moth (Plodia interpunctella (Hübner)), and adzuki beans infested with adzuki bean weevil (Callosobruchus chinensis (Linne)), although small numbers of internal feeders such as C. chinensis in adzuki bean and S. zeamais in brown rice survived. The results indicate that the commercial-scale soft-electron machine can disinfest grains and beans, especially those contaminated with external feeders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristos Aristidou Natureworks); Robert Kean; Tom Schechinger
2007-10-01
The two main objectives of this project were: 1) to develop and test technologies to harvest, transport, store, and separate corn stover to supply a clean raw material to the bioproducts industry, and 2) to engineer fermentation systems to meet performance targets for lactic acid and ethanol manufacturers. Significant progress was made in testing methods to harvest corn stover in a “single pass” harvest mode (collect corn grain and stover at the same time). This is technically feasible on small scale, but additional equipment refinements will be needed to facilitate cost effective harvest on a larger scale. Transportation models were developed, which indicate that at a corn stover yield of 2.8 tons/acre and purchase price of $35/ton stover, it would be unprofitable to transport stover more than about 25 miles; thus suggesting the development of many regional collection centers. Therefore, collection centers should be located within about 30 miles of the farm, to keep transportation costs to an acceptable level. These collection centers could then potentially do some preprocessing (to fractionate or increase bulk density) and/or ship the biomass by rail or barge to the final customers. Wet storage of stover via ensilage was tested, but no clear economic advantages were evident. Wet storage eliminates fire risk, but increases the complexity of component separation and may result in a small loss of carbohydrate content (fermentation potential). A study of possible supplier-producer relationships concluded that a “quasi-vertical” integration model would be best suited for new bioproducts industries based on stover. In this model, the relationship would involve a multiyear supply contract (processor with purchase guarantees, producer group with supply guarantees). Price will likely be fixed or calculated based on some formula (possibly a cost plus). Initial quality requirements will be specified (but subject to refinement). Producers would invest in harvest/storage/transportation equipment and the processor would build and operate the plant. Pilot fermentation studies demonstrated dramatic improvements in yields and rates with optimization of batch fermentor parameters. Demonstrated yields and rates are approaching those necessary for profitable commercial operation for production of ethanol or lactic acid. The ability of the biocatalyst to adapt to biomass hydrolysate (both biomass sugars and toxins in the hydrolysate) was demonstrated and points towards ultimate successful commercialization of the technology. However, some of this work will need to be repeated and possibly extended to adapt the final selected biocatalyst for the specific commercial hydrolysate composition. The path from corn stover in the farm field to final products involves a number of steps. Each of these steps has options, problems, and uncertainties; thus creating a very complex multidimensional obstacle to successful commercial development. Through the tasks of this project, the technical and commercial uncertainties of many of these steps have been addressed; thus providing for a clearer understanding of paths forward and commercial viability of a corn stover-based biorefinery.
120-MHz BiCMOS superscalar RISC processor
NASA Astrophysics Data System (ADS)
Tanaka, Shigeya; Hotta, Takashi; Murabayashi, Fumio; Yamada, Hiromichi; Yoshida, Shoji; Shimamura, Kotaro; Katsura, Koyo; Bandoh, Tadaaki; Ikeda, Koichi; Matsubara, Kenji
1994-04-01
A superscalar RISC processor contains 2.8 million transistors in a die size of 16.2 mm x 16.5 mm, and utilizes 3.3 V/0.5 micron BiCMOS technology. In order to take advantage of superscalar performance without incurring penalties from a slower clock or a longer pipeline, a tag bit is implemented in the instruction cache to indicate dependency between two instructions. A performance gain of up to 37% is obtained with only a 3.5% area overhead from our superscalar design.
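For context, the pairwise dependency that such a predecoded tag bit records is a read-after-write hazard between adjacent instructions. The toy Python sketch below checks that condition; the instruction fields are hypothetical and unrelated to the processor's actual instruction format.

```python
# Toy read-after-write check between two adjacent instructions, the kind of
# pairwise dependency a predecoded tag bit can cache so the issue stage does
# not recompute it every cycle. Instruction fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Instr:
    dest: int        # destination register number
    srcs: tuple      # source register numbers

def can_dual_issue(first: Instr, second: Instr) -> bool:
    """True if the second instruction does not read the first one's result."""
    return first.dest not in second.srcs

i0 = Instr(dest=3, srcs=(1, 2))   # r3 = r1 + r2
i1 = Instr(dest=5, srcs=(3, 4))   # r5 = r3 + r4, depends on i0
print(can_dual_issue(i0, i1))     # False -> issue serially
```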
Ansari, A H; Cherian, P J; Dereymaeker, A; Matic, V; Jansen, K; De Wispelaere, L; Dielman, C; Vervisch, J; Swarte, R M; Govaert, P; Naulaers, G; De Vos, M; Van Huffel, S
2016-09-01
After identifying the most seizure-relevant characteristics by a previously developed heuristic classifier, a data-driven post-processor using a novel set of features is applied to improve the performance. The main characteristics of the outputs of the heuristic algorithm are extracted by five sets of features including synchronization, evolution, retention, segment, and signal features. Then, a support vector machine and a decision-making layer remove the falsely detected segments. Four datasets, including 71 neonates (1023 h, 3493 seizures) recorded in two different university hospitals, are used to train and test the algorithm without removing the dubious seizures. The heuristic method resulted in a false alarm rate of 3.81 per hour and a good detection rate of 88% on the entire test databases. The post-processor effectively reduces the false alarm rate by 34% while the good detection rate decreases by 2%. This post-processing technique improves the performance of the heuristic algorithm. The structure of this post-processor is generic, improves our understanding of the core visually determined EEG features of neonatal seizures and is applicable for other neonatal seizure detectors. The post-processor significantly decreases the false alarm rate at the expense of a small reduction of the good detection rate. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
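The post-processing stage described above can be pictured as a feature extractor followed by an SVM that rejects false detections. The Python sketch below shows that pattern with scikit-learn; the feature names and data are illustrative placeholders, not the paper's feature set or training procedure.

```python
# Schematic post-processor: features are extracted from each detected segment
# and a support vector machine rejects segments judged to be false alarms.
# Features and labels here are illustrative placeholders only.
import numpy as np
from sklearn.svm import SVC

def segment_features(seg):
    """Toy per-segment features (energy, line length, duration)."""
    return [np.mean(seg**2), np.sum(np.abs(np.diff(seg))), len(seg)]

rng = np.random.default_rng(0)
segs = [rng.standard_normal(rng.integers(50, 200)) for _ in range(40)]
X = np.array([segment_features(s) for s in segs])
y = rng.integers(0, 2, size=len(segs))      # 1 = true seizure, 0 = false alarm

clf = SVC(kernel="rbf").fit(X, y)           # train the rejection stage
new_seg = rng.standard_normal(120)
keep = clf.predict([segment_features(new_seg)])[0] == 1
print("keep detection" if keep else "discard as false alarm")
```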
Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.
Sharma, Anuj; Manolakos, Elias S
2015-01-01
Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub.
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.
1994-01-01
Electronic and optoelectronic hardware implementations of highly parallel computing architectures address several ill-defined and/or computation-intensive problems not easily solved by conventional computing techniques. The concurrent processing architectures developed are derived from a variety of advanced computing paradigms including neural network models, fuzzy logic, and cellular automata. Hardware implementation technologies range from state-of-the-art digital/analog custom-VLSI to advanced optoelectronic devices such as computer-generated holograms and e-beam fabricated Dammann gratings. JPL's concurrent processing devices group has developed a broad technology base in hardware implementable parallel algorithms, low-power and high-speed VLSI designs and building block VLSI chips, leading to application-specific high-performance embeddable processors. Application areas include high throughput map-data classification using feedforward neural networks, a terrain-based tactical movement planner using cellular automata, resource optimization (weapon-target assignment) using a multidimensional feedback network with lateral inhibition, and classification of rocks using an inner-product scheme on thematic mapper data. In addition to addressing specific functional needs of DOD and NASA, the JPL-developed concurrent processing device technology is also being customized for a variety of commercial applications (in collaboration with industrial partners), and is being transferred to U.S. industries. This viewgraph presentation focuses on two application-specific processors which solve the computation-intensive tasks of resource allocation (weapon-target assignment) and terrain-based tactical movement planning using two extremely different topologies. Resource allocation is implemented as an asynchronous analog competitive assignment architecture inspired by the Hopfield network. Hardware realization leads to a two to four order of magnitude speed-up over conventional techniques and enables multiple assignments (many to many) not achievable with standard statistical approaches. Tactical movement planning (finding the best path from A to B) is accomplished with a digital two-dimensional concurrent processor array. By exploiting the natural parallel decomposition of the problem in silicon, a four order of magnitude speed-up over optimized software approaches has been demonstrated.
Electrically reconfigurable logic array
NASA Technical Reports Server (NTRS)
Agarwal, R. K.
1982-01-01
To compose complicated systems using algorithmically specialized logic circuits or processors, one solution is to perform relational computations such as union, division, and intersection directly in hardware. These relations can be pipelined efficiently on a network of processors having an array configuration. These processors can be designed and implemented with a few simple cells. In order to determine the state of the art in Electrically Reconfigurable Logic Arrays (ERLA), a survey of the available programmable logic arrays (PLA) and the logic circuit elements used in such arrays was conducted. Based on this survey, some recommendations are made for ERLA devices.
Is random access memory random?
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match processor addressing rate with memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
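The locality point can be demonstrated directly: reading the same array through sequential indices versus randomly permuted indices touches identical data but produces very different memory-access patterns. A small Python illustration follows; the exact timings are machine dependent, but the sequential pass is typically several times faster.

```python
# Demonstration of locality of reference: summing an array through sequential
# versus randomly permuted indices reads the same bytes, but the random order
# defeats caching and prefetching. Timings vary by machine; the ratio matters.
import time
import numpy as np

a = np.random.rand(10_000_000)
idx_seq = np.arange(a.size)
idx_rand = np.random.permutation(a.size)

t0 = time.perf_counter()
s_seq = a[idx_seq].sum()        # sequential (cache-friendly) gather
t1 = time.perf_counter()
s_rand = a[idx_rand].sum()      # random (cache-hostile) gather
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.3f} s   random: {t2 - t1:.3f} s   "
      f"same sum: {np.isclose(s_seq, s_rand)}")
```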
Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks
Kim, Deokho; Park, Karam; Ro, Won W.
2011-01-01
While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores would be used as processing nodes of wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors referred to as the Cell Broadband Engine. PMID:22164053
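To see where the decoding cost comes from, the following toy Python sketch performs systematic random linear network coding over GF(2) (XOR combinations) and recovers the originals by Gaussian elimination, which is the computation-heavy step that motivates offloading to multiple cores. The field choice and packet sizes are simplifications, not the paper's algorithm.

```python
# Toy systematic random linear network coding over GF(2): coded packets are
# XOR combinations of the originals, decoding is Gaussian elimination.
import numpy as np

rng = np.random.default_rng(1)

def encode(packets, n_coded):
    k = len(packets)
    coeffs = np.vstack([np.eye(k, dtype=np.uint8),                        # systematic rows
                        rng.integers(0, 2, (n_coded - k, k), np.uint8)])  # random rows
    return coeffs, (coeffs @ packets) % 2                                 # XOR combinations

def decode(coeffs, coded, k):
    A = np.hstack([coeffs, coded]).astype(np.uint8)
    row = 0
    for col in range(k):                                  # Gaussian elimination over GF(2)
        pivot = next(r for r in range(row, len(A)) if A[r, col])
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        row += 1
    return A[:k, k:]

original = rng.integers(0, 2, (4, 16), np.uint8)          # 4 packets of 16 bits
coeffs, coded = encode(original, n_coded=8)
print(np.array_equal(decode(coeffs, coded, k=4), original))  # True
```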
Atmospheric Correction Inter-comparison Exercise (ACIX)
NASA Astrophysics Data System (ADS)
Vermote, E.; Doxani, G.; Gascon, F.; Roger, J. C.; Skakun, S.
2017-12-01
The free and open data access policy to Landsat-8 (L-8) and Sentinel-2 (S-2) satellite imagery has encouraged the development of atmospheric correction (AC) approaches for generating Bottom-of-Atmosphere (BOA) products. Several entities have started to generate (or plan to generate in the short term) BOA reflectance products at global scale for the L-8 and S-2 missions. To this end, the European Space Agency (ESA) and National Aeronautics and Space Administration (NASA) have initiated an exercise on the inter-comparison of the available AC processors. The results of the exercise are expected to point out the strengths and weaknesses, as well as commonalities and discrepancies, of various AC processors, in order to suggest and define ways for their further improvement. In particular, 11 atmospheric processors from five different countries participate in ACIX with the aim to inter-compare their performance when applied to L-8 and S-2 data. All the processors should be operational without requiring parametrization when applied to different areas. A protocol describing in detail the inter-comparison metrics and the test dataset based on the AERONET sites was agreed unanimously during the 1st ACIX workshop in June 2016. In particular, a basic and an advanced run of each processor were requested in the frame of ACIX, with the aim to draw robust and reliable conclusions on the processors' performance. The protocol also describes the comparison metrics of the aerosol optical thickness and water vapour products of the processors with the corresponding AERONET measurements. Moreover, concerning the surface reflectances, the inter-comparison among the processors is defined, as well as the comparison with the MODIS surface reflectance and with a reference surface reflectance product. Such a reference product will be obtained using the AERONET characterization of the aerosol (size distribution and refractive indices) and an accurate radiative transfer code. The inter-comparison outcomes were presented and discussed among the ACIX participants in the 2nd ACIX workshop, which was held on 11-12 April 2017 (ESRIN/ESA), and a detailed report was compiled. The proposed presentation is an opportunity for the user community to be informed about the ACIX results and conclusions.
Waterway Shielding System and Method
2003-04-30
In early October 2002, a French VLCC (Very Large Crude Carrier) chartered by Malaysian state oil company Petronas was attacked by terrorists... Combiner 72 may provide a processor with algorithms that are used to add candidates when uncertain and delete candidates when analysis data is fairly... commercial interests that normally have a need to know where shipments of interest on any particular ship are presently located, where competitor
2012-02-17
to be solved. Disclaimer: Reference herein to any specific commercial company, product, process, or service by trade name, trademark... data processing rather than data caching and control flow. To make use of this computational power, NVIDIA introduced a general-purpose parallel... GPU implementations were run on an Intel Nehalem Xeon E5520 2.26 GHz processor with an NVIDIA Tesla C2070 graphics card for varying numbers of
Analysis of Acoustic Depth Sounder Signals with Artificial Neural Networks
1991-04-01
battery pack, processor, and mode switches and (2) a stainless steel shaft 1 meter long and 27 millimeters in diameter, containing 8 milliCurie of... returned signal which is not used in conventional depth sounders due to lack of real-time tools for interpreting the information. The shape and... develop some software tools for conducting the research. Commercial programs for neural network implementation were available, but were "black box" in
Methods for operating parallel computing systems employing sequenced communications
Benner, Robert E.; Gustafson, John L.; Montry, Gary R.
1999-01-01
A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.
Design and Performance of the Astro-E/XRS Signal Processing System
NASA Technical Reports Server (NTRS)
Boyce, Kevin R.; Audley, M. D.; Baker, R. G.; Dumonthier, J. J.; Fujimoto, R.; Gendreau, K. C.; Ishisaki, Y.; Kelley, R. L.; Stahle, C. K.; Szymkowiak, A. E.
1999-01-01
We describe the signal processing system of the Astro-E XRS instrument. The Calorimeter Analog Processor (CAP) provides bias and power for the detectors and amplifies the detector signals by a factor of 20,000. The Calorimeter Digital Processor (CDP) performs the digital processing of the calorimeter signals, detecting X-ray pulses and analyzing them by optimal filtering. We describe the operation of pulse detection, pulse-height analysis, and risetime determination. We also discuss performance, including the three event grades (hi-res, mid-res, and low-res), anticoincidence detection, counting rate dependence, and noise rejection.
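At its core, optimal filtering estimates a pulse height as a noise-weighted template fit to each detected record. The sketch below shows the white-noise special case, an ordinary matched-filter/least-squares amplitude; it is illustrative only and not the XRS flight algorithm, which weights by the measured noise spectrum.

```python
# Least-squares pulse-height estimate for a known pulse shape: the white-noise
# special case of optimal filtering. Illustration only.
import numpy as np

def pulse_height(record, template):
    """Amplitude that best fits `template` to `record` in a least-squares sense."""
    return float(np.dot(template, record) / np.dot(template, template))

t = np.arange(512.0)
template = np.exp(-t / 100.0) - np.exp(-t / 10.0)          # toy pulse shape
noise = 0.05 * np.random.default_rng(2).standard_normal(t.size)
record = 3.7 * template + noise                            # simulated event record
print(pulse_height(record, template))                      # close to 3.7
```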
Algorithms and software for solving finite element equations on serial and parallel architectures
NASA Technical Reports Server (NTRS)
Chu, Eleanor; George, Alan
1988-01-01
The primary objective was to compare the performance of state-of-the-art techniques for solving sparse systems with those currently available in the Computational Structural Mechanics (CSM) Testbed. One of the first tasks was to become familiar with the structure of the testbed and to install some or all of the SPARSPAK package in the testbed. A brief overview of the CSM Testbed software and its usage is presented. An overview of the sparse matrix techniques currently employed in the CSM Testbed is given. An interface, designed and implemented as a research tool for installing and appraising new matrix processors in the CSM Testbed, is described. The results of numerical experiments performed in solving a set of testbed demonstration problems using the processor SPK and other experimental processors are reported.
Performance of Distributed CFAR Processors in Pearson Distributed Clutter
NASA Astrophysics Data System (ADS)
Messali, Zoubeida; Soltani, Faouzi
2006-12-01
This paper deals with distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating on positive alpha-stable (PαS) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest of (GO) and the smallest of (SO) CFAR processors are also determined. The performance characteristics of distributed systems are presented and compared, both in homogeneous environments and in the presence of interfering targets. We demonstrate, via simulation results, that the distributed systems, when the clutter is modelled as a positive alpha-stable distribution, offer robustness properties against multiple target situations, especially when using the "OR" fusion rule.
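For readers unfamiliar with the cell-averaging scheme mentioned above, a single CA-CFAR detector estimates the local clutter level from reference cells around the cell under test and applies a scaled threshold. A minimal single-detector Python sketch follows, using exponential toy clutter rather than Pearson clutter and omitting the fusion of distributed decisions.

```python
# Minimal cell-averaging CFAR: flag cells exceeding scale * local noise estimate.
import numpy as np

def ca_cfar(power, n_ref=16, n_guard=2, scale=5.0):
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    half = n_ref // 2
    for i in range(half + n_guard, n - half - n_guard):
        lead = power[i - n_guard - half : i - n_guard]       # reference cells before
        lag = power[i + n_guard + 1 : i + n_guard + 1 + half]  # reference cells after
        noise = np.mean(np.concatenate([lead, lag]))
        detections[i] = power[i] > scale * noise
    return detections

rng = np.random.default_rng(3)
power = rng.exponential(1.0, size=200)      # toy clutter (exponential, not Pearson)
power[120] += 30.0                          # injected target
print(np.flatnonzero(ca_cfar(power)))
```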
NASA Astrophysics Data System (ADS)
Peschmann, K. R.; Parker, D. L.; Smith, V.
1982-11-01
An abundant number of different CT scanner models have been developed in the past ten years, meeting increasing standards of performance. From the beginning they have remained a comparatively expensive piece of equipment. This is due not only to their technical complexity but also to the difficulties involved in assessing "true" specifications (avoiding "overdesign"). Our aim has been to provide, for Radiation Therapy Treatment Planning, a low cost CT scanner system featuring large freedom in patient positioning. We have taken advantage of the concurrent tremendously increased amount of knowledge and experience in the technical area of CT. By way of extensive computer simulations we gained confidence that an inexpensive C-arm simulator gantry and a simple one-phase, two-pulse generator in connection with a standard x-ray tube could be used without sacrificing image quality. These components have been complemented by a commercial high precision shaft encoder, a simple and effective fan beam collimator, a high precision, high efficiency, luminescence crystal-silicon photodiode detector with 256 channels, low noise electronic preamplifier and sampling filter stages, a simplified data acquisition system furnished by Toshiba/Analogic, and an LSI 11/23 microcomputer plus data storage disk as well as various smaller interfaces linking the electrical components. The quality of CT scan pictures of phantoms, performed by the end of last year, confirmed that this simple approach is working well. As a next step we intend to upgrade this system with an array processor in order to shorten reconstruction time to one minute per slice. We estimate that the system, including this processor, could be manufactured for a selling price of $210,000.
HodDB: Design and Analysis of a Query Processor for Brick.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fierro, Gabriel; Culler, David
Brick is a recently proposed metadata schema and ontology for describing building components and the relationships between them. It represents buildings as directed labeled graphs using the RDF data model. Using the SPARQL query language, building-agnostic applications query a Brick graph to discover the set of resources and relationships they require to operate. Latency-sensitive applications, such as user interfaces, demand response and model-predictive control, require fast queries — conventionally less than 100 ms. We benchmark a set of popular open-source and commercial SPARQL databases against three real Brick models using seven application queries and find that none of them meet this performance target. This lack of performance can be attributed to design decisions that optimize for queries over large graphs consisting of billions of triples, but give poor spatial locality and join performance on the small dense graphs typical of Brick. We present the design and evaluation of HodDB, an RDF/SPARQL database for Brick built over a node-based index structure. HodDB performs Brick queries 3-700x faster than leading SPARQL databases and consistently meets the 100 ms threshold, enabling the portability of important latency-sensitive building applications.
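To make the workload concrete, a Brick-style discovery query might look like the sketch below, shown here with rdflib purely for illustration. The graph file, namespace URI, and the specific class and relationship names are assumptions for the example; HodDB exposes its own query interface rather than rdflib.

```python
# Illustrative Brick-style SPARQL discovery query, run here with rdflib.
# The file name, namespace URI, and class/relationship names are assumptions
# for illustration; HodDB provides its own interface rather than rdflib.
import rdflib

g = rdflib.Graph()
g.parse("building.ttl", format="turtle")   # hypothetical Brick model of one building

query = """
PREFIX brick: <https://brickschema.org/schema/Brick#>
SELECT ?ahu ?sensor WHERE {
    ?ahu    a brick:AHU .
    ?ahu    brick:hasPoint ?sensor .
    ?sensor a brick:Supply_Air_Temperature_Sensor .
}
"""

for ahu, sensor in g.query(query):
    print(ahu, sensor)
```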
Comparison of Monte Carlo simulated and measured performance parameters of miniPET scanner
NASA Astrophysics Data System (ADS)
Kis, S. A.; Emri, M.; Opposits, G.; Bükki, T.; Valastyán, I.; Hegyesi, Gy.; Imrek, J.; Kalinka, G.; Molnár, J.; Novák, D.; Végh, J.; Kerek, A.; Trón, L.; Balkay, L.
2007-02-01
In vivo imaging of small laboratory animals is a valuable tool in the development of new drugs. For this purpose, miniPET, an easy to scale modular small animal PET camera has been developed at our institutes. The system has four modules, which makes it possible to rotate the whole detector system around the axis of the field of view. Data collection and image reconstruction are performed using a data acquisition (DAQ) module with Ethernet communication facility and a computer cluster of commercial PCs. Performance tests were carried out to determine system parameters, such as energy resolution, sensitivity and noise equivalent count rate. A modified GEANT4-based GATE Monte Carlo software package was used to simulate PET data analogous to those of the performance measurements. GATE was run on a Linux cluster of 10 processors (64 bit, Xeon with 3.0 GHz) and controlled by a SUN grid engine. The application of this special computer cluster reduced the time necessary for the simulations by an order of magnitude. The simulated energy spectra, maximum rate of true coincidences and sensitivity of the camera were in good agreement with the measured parameters.
Monte Carlo dose calculation using a cell processor based PlayStation 3 system
NASA Astrophysics Data System (ADS)
Chow, James C. L.; Lam, Phil; Jaffray, David A.
2012-02-01
This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured by the profiler gprof, specifying the number of executions and total time spent on the functions. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc is made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.
NASA Technical Reports Server (NTRS)
Cudmore, Alan; Leath, Tim; Ferrer, Art; Miller, Todd; Walters, Mark; Savadkin, Bruce; Wu, Ji-Wei; Slegel, Steve; Stagmer, Emory
2007-01-01
The command-and-data-handling (C&DH) software of the Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft functions as the sole interface between (1) the spacecraft and its instrument subsystem and (2) ground operations equipment. This software includes a command-decoding and -distribution system, a telemetry/data-handling system, and a data-storage-and-playback system. This software performs onboard processing of attitude sensor data and generates commands for attitude-control actuators in a closed-loop fashion. It also processes stored commands and monitors health and safety functions for the spacecraft and its instrument subsystems. The basic functionality of this software is the same as that of the older C&DH software of the Rossi X-Ray Timing Explorer (RXTE) spacecraft, the main difference being the addition of the attitude-control functionality. Previously, the C&DH and attitude-control computations were performed by different processors because a single RXTE processor did not have enough processing power. The WMAP spacecraft includes a more-powerful processor capable of performing both computations.
ISFET-based sensor signal processor chip design for environment monitoring applications
NASA Astrophysics Data System (ADS)
Chung, Wen-Yaw; Yang, Chung-Huang; Wang, Ming-Ga
2004-12-01
In recent years, Ion-Sensitive Field Effect Transistor (ISFET) based transducers have created valuable applications in physiological data acquisition and environment monitoring. This paper presents a mixed-mode ASIC design for potentiometric ISFET-based bio-chemical sensor applications, including H+ sensing and a hand-held pH meter. For battery power consideration, the proposed system, consisting of low-voltage (3 V) analog front-end readout circuits and a digital processor, has been developed and fabricated in a 0.5 μm double-poly double-metal CMOS technology. To assure that the correct pH value can be measured, two-point calibration circuitry based on the responses to standard pH 4 and pH 7 buffer solutions has been implemented using algorithmic state machine hardware. The measurement accuracy of the chip is 10 bits, and readings over the range pH 2 to pH 12 are within 0.1 pH of ideal values. For homeland environmental applications, the system provides rapid, easy-to-use, and cost-effective on-site testing of water quality, such as drinking water, ground water, and river water. Compared to commercial hand-held pH meters, the processor has potential usage in battery-operated and portable devices for environmental monitoring applications.
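The two-point calibration mentioned above amounts to fitting a straight line through the responses to the pH 4 and pH 7 buffers and inverting it for unknown samples. A minimal arithmetic sketch follows; the on-chip version is realized as a fixed-point algorithmic state machine, and the electrode readings below are made-up example values.

```python
# Two-point pH calibration: fit a line through the pH 4 and pH 7 buffer
# responses, then convert an unknown reading to pH. Example values only.
def two_point_calibration(v_ph4, v_ph7):
    """Return slope (V per pH unit) and offset from the two buffer readings."""
    slope = (v_ph7 - v_ph4) / (7.0 - 4.0)
    offset = v_ph7 - slope * 7.0
    return slope, offset

def to_ph(v, slope, offset):
    return (v - offset) / slope

# Made-up electrode readings, roughly -55 mV per pH unit:
slope, offset = two_point_calibration(v_ph4=0.165, v_ph7=0.0)
print(round(to_ph(0.055, slope, offset), 2))   # a reading between the two buffers
```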
Evaluation of fault-tolerant parallel-processor architectures over long space missions
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1989-01-01
The impact of a five year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10^-7. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP), is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.
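For intuition about such requirements, the probability that at least k of n redundant processors survive a mission under a constant failure rate can be computed with a simple binomial model. The Python sketch below uses illustrative numbers only; it is not the FTPP reliability analysis, which accounts for reconfiguration and coverage.

```python
# Toy k-of-n reliability estimate: exponential lifetimes, no repair.
from math import comb, exp

def prob_k_of_n_survive(n, k, failure_rate_per_hour, mission_hours):
    """P(at least k of n processors survive the mission)."""
    p = exp(-failure_rate_per_hour * mission_hours)      # single-unit survival
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

five_years = 5 * 365 * 24
# Made-up failure rate of 1e-6 per hour per processor:
print(prob_k_of_n_survive(n=4, k=3, failure_rate_per_hour=1e-6,
                          mission_hours=five_years))
```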
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1991-01-01
Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such an improvement in turnaround time is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
NCC Simulation Model: Simulating the operations of the network control center, phase 2
NASA Technical Reports Server (NTRS)
Benjamin, Norman M.; Paul, Arthur S.; Gill, Tepper L.
1992-01-01
The simulation of the network control center (NCC) is in the second phase of development. This phase seeks to further develop the work performed in phase one. Phase one concentrated on the computer systems and interconnecting network. The focus of phase two will be the implementation of the network message dialogues and the resources controlled by the NCC. These resources are requested, initiated, monitored and analyzed via network messages. In the NCC, network messages are presented in the form of packets that are routed across the network. These packets are generated, encoded, decoded and processed by the network host processors that generate and service the message traffic on the network that connects these hosts. As a result, the message traffic is used to characterize the work done by the NCC and the connected network. Phase one of the model development represented the NCC as a network of bi-directional single-server queues and message-generating sources. The generators represented the external segment processors. The server-based queues represented the host processors. The NCC model consists of the internal and external processors which generate message traffic on the network that links these hosts. To fully realize the objective of phase two it is necessary to identify and model the processes in each internal processor. These processes live in the operating system of the internal host computers and handle tasks such as high-speed message exchanging, ISN and NFE interface, event monitoring, network monitoring, and message logging. Inter-process communication is achieved through the operating system facilities. The overall performance of the host is determined by its ability to service messages generated by both internal and external processors.
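The phase-one representation (message generators feeding single-server queues) can be prototyped directly. The toy Python sketch below simulates one link as a single-server queue with exponential interarrival and service times; it illustrates the modeling idea only and is not the NCC model itself.

```python
# Toy single-server queue fed by a message generator: returns the mean time a
# message spends at the node (waiting plus service). Illustration only.
import random

def simulate_queue(arrival_rate, service_rate, n_messages, seed=0):
    rng = random.Random(seed)
    t, server_free_at, total_delay = 0.0, 0.0, 0.0
    for _ in range(n_messages):
        t += rng.expovariate(arrival_rate)        # next message arrives
        start = max(t, server_free_at)            # waits if the server is busy
        finish = start + rng.expovariate(service_rate)
        server_free_at = finish
        total_delay += finish - t
    return total_delay / n_messages

print(simulate_queue(arrival_rate=8.0, service_rate=10.0, n_messages=100_000))
```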
Magalhães, Ana Tereza de Matos; Goffi-Gomez, M Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens
2013-09-01
To identify the technological contributions of the newer version of the speech processor to the first generation of multichannel cochlear implant and the satisfaction of users of the new technology. Among the new features available, we focused on the effect of the frequency allocation table, the T-SPL and C-SPL, and the preprocessing gain adjustments (adaptive dynamic range optimization). Prospective exploratory study. Cochlear implant center at a hospital. Cochlear implant users of the Spectra processor with speech recognition in closed set. Seventeen patients between the ages of 15 and 82, implanted for more than 8 years, were selected. The intervention was the technology update of the speech processor for the Nucleus 22. To determine Freedom's contribution, thresholds and speech perception tests were performed with the last map used with the Spectra and the maps created for Freedom. To identify the effect of the frequency allocation table, both upgraded and converted maps were programmed. One map was programmed with 25 dB T-SPL and 65 dB C-SPL and the other map with adaptive dynamic range optimization. To assess satisfaction, SADL and APHAB were used. All speech perception tests and all sound field thresholds were statistically better with the new speech processor; 64.7% of patients preferred maintaining the same frequency table that was suggested for the older processor. The sound field threshold was statistically significant at 500, 1,000, 1,500, and 2,000 Hz with 25 dB T-SPL/65 dB C-SPL. Regarding patient satisfaction, there was a statistically significant improvement only in the subscales of speech in noise abilities and phone use.
Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB
NASA Technical Reports Server (NTRS)
Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.
2017-01-01
Demonstrating speedup for parallel code on a multicore shared memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit potential for improvement of serial code even for the so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated while only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
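The embarrassingly parallel structure comes from the fact that each Jacobian column depends only on one perturbed evaluation of the residual function. The sketch below shows a forward-difference Jacobian with its columns distributed across worker processes; it is written in Python rather than MATLAB purely for illustration of the pattern, not as the paper's spmd implementation.

```python
# Forward-difference Jacobian with independent columns evaluated in parallel.
# Illustration of the pattern only; the study used MATLAB's spmd construct.
import numpy as np
from multiprocessing import Pool

def jac_column(args):
    f, x, j, h = args
    e = np.zeros_like(x)
    e[j] = h
    return (f(x + e) - f(x)) / h            # forward difference for column j

def numerical_jacobian(f, x, h=1e-6, workers=4):
    args = [(f, x, j, h) for j in range(len(x))]
    with Pool(workers) as pool:             # columns are independent tasks
        cols = pool.map(jac_column, args)
    return np.column_stack(cols)

def residual(x):                            # toy nonlinear system
    return np.array([x[0]**2 + x[1], np.sin(x[0]) * x[1]])

if __name__ == "__main__":
    print(numerical_jacobian(residual, np.array([1.0, 2.0])))
```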
Wolfe, Jace; Morais Duke, Mila; Schafer, Erin; Cire, George; Menapace, Christine; O'Neill, Lori
2016-01-01
The objective of this study was to evaluate the potential improvement in word recognition in quiet and in noise obtained with use of a Bluetooth-compatible wireless hearing assistance technology (HAT) relative to the acoustic mobile telephone condition (e.g. the mobile telephone receiver held to the microphone of the sound processor). A two-way repeated measures design was used to evaluate differences in telephone word recognition obtained in quiet and in competing noise in the acoustic mobile telephone condition compared to performance obtained with use of the CI sound processor and a telephone HAT. Sixteen adult users of Nucleus cochlear implants and the Nucleus 6 sound processor were included in this study. Word recognition over the mobile telephone in quiet and in noise was significantly better with use of the wireless HAT compared to performance in the acoustic mobile telephone condition. Word recognition over the mobile telephone was better in quiet when compared to performance in noise. The results of this study indicate that use of a wireless HAT improves word recognition over the mobile telephone in quiet and in noise relative to performance in the acoustic mobile telephone condition for a group of adult cochlear implant recipients.
Evaluation of Natural Language Processors.
1980-11-01
techniques described. Common practice in describing natural language processors is to describe the programs, then give about 20 examples of correctly...make a decision based on performance as to which approaches are most promising for further research and development. The lack of evaluation leaves...successively more difficult problems. This approach might be compared to children taking achievement tests in school. A 90% score on problems involving
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donofrio, David
A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
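For context, a stencil computation updates each point of a grid from a fixed pattern of neighbor offsets. The plain NumPy illustration below shows a 1-D three-point stencil; it demonstrates the access pattern only and is unrelated to the patented register-indexed load mechanism itself.

```python
# One Jacobi-style sweep of a three-point stencil: each interior point is
# updated from its neighbours at fixed offsets -1 and +1.
import numpy as np

def three_point_stencil(u, c=0.25):
    out = u.copy()
    out[1:-1] = u[1:-1] + c * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return out

u = np.zeros(64)
u[32] = 1.0                      # initial spike
for _ in range(10):
    u = three_point_stencil(u)   # heat-equation-like smoothing
print(u.round(3))
```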
NASA Technical Reports Server (NTRS)
Marlowe, M. B.; Moore, R. A.; Whetstone, W. D.
1979-01-01
User instructions are given for performing linear and nonlinear steady state and transient thermal analyses with SPAR thermal analysis processors TGEO, SSTA, and TRTA. It is assumed that the user is familiar with basic SPAR operations and basic heat transfer theory.
Performance Qualification Test of the ISS Water Processor Assembly (WPA) Expendables
NASA Technical Reports Server (NTRS)
Carter, Layne; Tabb, David; Tatara, James D.; Mason, Richard K.
2005-01-01
The Water Processor Assembly (WPA) for use on the International Space Station (ISS) includes various technologies for the treatment of waste water. These technologies include filtration, ion exchange, adsorption, catalytic oxidation, and iodination. The WPA hardware implementing portions of these technologies, including the Particulate Filter, Multifiltration Bed, Ion Exchange Bed, and Microbial Check Valve, was recently qualified for chemical performance at the Marshall Space Flight Center. Waste water representing the quality of that produced on the ISS was generated by test subjects and processed by the WPA. Water quality analysis and instrumentation data was acquired throughout the test to monitor hardware performance. This paper documents operation of the test and the assessment of the hardware performance.
FAST: framework for heterogeneous medical image computing and visualization.
Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-11-01
Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphic processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the insight toolkit (ITK) and the visualization toolkit (VTK) and show that the presented framework is faster with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.
Level Zero Trigger Processor for the NA62 experiment
NASA Astrophysics Data System (ADS)
Soldi, D.; Chiozzi, S.
2018-05-01
The NA62 experiment is designed to measure the branching ratio of the ultra-rare decay K+ → π+ ν ν̄ with a precision of ~10% at the CERN Super Proton Synchrotron (SPS). The trigger system of NA62 consists of three different levels designed to select events of physics interest in a high beam rate environment. The L0 Trigger Processor (L0TP) is the lowest-level system of the trigger chain. It is implemented in hardware using programmable logic. The architecture of the NA62 L0TP system is a new approach compared to existing systems used in high-energy physics experiments. It is fully digital, based on standard gigabit Ethernet communication between detectors and the L0TP Board. The L0TP Board is a commercial development board, mounting a programmable logic device (FPGA). The primitives generated by sub-detectors are sent asynchronously using the UDP protocol to the L0TP during the entire beam spill period. The L0TP realigns in time the primitives coming from seven different sources and performs a data selection based on the characteristics of the event, such as energy, multiplicity, and topology of hits in the sub-detectors. It guarantees a maximum latency of 1 ms. The maximum input rate is about 10 MHz for each sub-detector, while the design maximum output trigger rate is 1 MHz. A description of the trigger algorithm is presented here.
Waste Vegetable Oil as an Alternative Fuel for Diesel Vehicles
2009-03-01
processor has a 160 gallon capacity, a fuel dryer, and features automatic mixing of the chemicals. The chemicals needed consist of lye (sodium...to distinguish it as tax-exempt. Fuel taxes are reported to the Internal Revenue Service (IRS) when the fuel is distributed to the “Service...collected in the commercial market. The refiner will pay the tax per gallon directly to the IRS. When the fuel is sold, the end user pays the tax
Optimally stopped variational quantum algorithms
NASA Astrophysics Data System (ADS)
Vinci, Walter; Shabani, Alireza
2018-04-01
Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for quadratic unconstrained binary optimization (QUBO) problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of VQA and even improve its scaling properties.
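To make the optimal-stopping idea concrete, the following is a hedged sketch (not the authors' code): repeated runs of a stochastic solver are treated as draws of a solution energy, each run is charged a fixed time penalty, and the loop stops once the estimated expected improvement from one more run falls below that penalty. The stand-in solver, the Gaussian fit, and all numbers are illustrative.

```python
import math
import random
import statistics

# Hedged sketch of an optimal-stopping benchmark for a stochastic solver:
# keep re-running while the estimated expected improvement of one more run
# exceeds the per-run time penalty lambda_t. The "solver" below is a stand-in.

def vqa_run():
    return random.gauss(1.0, 0.3)        # pretend energy returned by one VQA run

def expected_improvement(best, mu, sigma):
    """E[max(best - X, 0)] for X ~ N(mu, sigma): gain expected from one more run."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu) * cdf + sigma * pdf

def benchmark(lambda_t=0.02, max_runs=500, warmup=10):
    energies, best = [], float("inf")
    for n in range(1, max_runs + 1):
        e = vqa_run()
        energies.append(e)
        best = min(best, e)
        if n >= warmup:
            mu, sigma = statistics.mean(energies), statistics.stdev(energies)
            if expected_improvement(best, mu, sigma) < lambda_t:
                break
    return best, n, best + lambda_t * n   # solution quality plus total time cost

print(benchmark())
```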
Web surveillance system using platform-based design
NASA Astrophysics Data System (ADS)
Lin, Shin-Yo; Tsai, Tsung-Han
2004-04-01
A revolutionary methodology for an SOPC platform-based design environment for multimedia communications is developed. We embed a soft-core processor to perform image compression in an FPGA and plug an Ethernet daughter board into the SOPC development platform. A web surveillance platform system is then presented. The web surveillance system consists of three parts: image capture, a web server, and JPEG compression. In this architecture, the user can control the surveillance system remotely. By configuring an IP address on the Ethernet daughter board, the user can access the surveillance system via a browser. When the user accesses the surveillance system, the CMOS sensor captures the remote image, which is fed to the embedded processor. The embedded processor immediately performs JPEG compression, and the user then receives the compressed data via Ethernet. The entire system is implemented on an APEX20K200E484-2X device.
Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2
NASA Technical Reports Server (NTRS)
Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
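The remedy described in both abstracts above, repartitioning after edges are targeted but before refinement takes place, amounts to rebalancing by anticipated work rather than by current cell count. The toy sketch below is a hedged illustration (greedy bin-packing stands in for a real mesh partitioner; cell names and weights are invented), not the SP2 implementation.

```python
import heapq

# Toy illustration of "repartition after targeting, before refining": weight
# each cell by its anticipated refinement work, then rebalance so no single
# processor ends up holding almost all of the adaption work.

def repartition(cell_weights, num_procs):
    """Assign cells to processors, heaviest first, always to the lightest processor."""
    heap = [(0.0, p, []) for p in range(num_procs)]
    heapq.heapify(heap)
    for cell, w in sorted(cell_weights.items(), key=lambda kv: -kv[1]):
        load, p, cells = heapq.heappop(heap)
        cells.append(cell)
        heapq.heappush(heap, (load + w, p, cells))
    return {p: (load, cells) for load, p, cells in heap}

# Cells targeted for refinement in a localized region get high weights.
weights = {f"cell{i}": (5.0 if i < 8 else 1.0) for i in range(32)}
for proc, (load, cells) in sorted(repartition(weights, 4).items()):
    print(f"proc {proc}: load={load:.1f}, {len(cells)} cells")
```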
The parallel algorithm for the 2D discrete wavelet transform
NASA Astrophysics Data System (ADS)
Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel
2018-04-01
The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing on single-core CPUs. However, for parallel processing on multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of synchronization points at which data are exchanged. Consequently, these points often form a performance bottleneck. Our approach appropriately rearranges the calculations inside the transform and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluated on multi-core CPUs, our scheme consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
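For reference, the baseline separable lifting scheme the paper improves on looks roughly like the hedged sketch below: one level of CDF 5/3 lifting applied along rows and then columns. The neighbour accesses in the predict and update steps are exactly the data exchanges that become synchronization points when the image is split across cores. Periodic boundary handling and the filter choice are simplifications for the example; this is not the authors' rearranged scheme.

```python
import numpy as np

# One level of the CDF 5/3 lifting scheme along one axis. Each lifting step
# needs neighbouring samples, which is the data exchange that turns into a
# synchronization point when rows/columns are split across cores.

def lifting53_1d(x):
    x = x.astype(float)                       # work on a copy
    even, odd = x[0::2], x[1::2]              # views into the copy
    # predict step: odd samples become detail coefficients
    odd -= 0.5 * (even + np.roll(even, -1))   # np.roll = periodic boundary (simplification)
    # update step: even samples become approximation coefficients
    even += 0.25 * (odd + np.roll(odd, 1))
    return even, odd

def dwt2d_level(image):
    rows = [lifting53_1d(r) for r in image]           # horizontal pass
    lo = np.array([a for a, d in rows])
    hi = np.array([d for a, d in rows])
    cols_lo = [lifting53_1d(c) for c in lo.T]         # vertical pass
    cols_hi = [lifting53_1d(c) for c in hi.T]
    LL = np.array([a for a, d in cols_lo]).T
    LH = np.array([d for a, d in cols_lo]).T
    HL = np.array([a for a, d in cols_hi]).T
    HH = np.array([d for a, d in cols_hi]).T
    return LL, LH, HL, HH

bands = dwt2d_level(np.arange(64, dtype=float).reshape(8, 8))
print([b.shape for b in bands])               # four 4x4 subbands
```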
Multisensor data fusion for integrated maritime surveillance
NASA Astrophysics Data System (ADS)
Premji, A.; Ponsford, A. M.
1995-01-01
A prototype Integrated Coastal Surveillance system has been developed on Canada's East Coast to provide effective surveillance out to and beyond the 200 nautical mile Exclusive Economic Zone. The system has been designed to protect Canada's natural resources, and to monitor and control the coastline for smuggling, drug trafficking, and similar illegal activity. This paper describes the Multiple Sensor - Multiple Target data fusion system that has been developed. The fusion processor has been developed around the celebrated Multiple Hypothesis Tracking algorithm which accommodates multiple targets, new targets, false alarms, and missed detections. This processor performs four major functions: plot-to-track association to form individual radar tracks; fusion of radar tracks with secondary sensor reports; track identification and tagging using secondary reports; and track level fusion to form common tracks. Radar data from coherent and non-coherent radars has been used to evaluate the performance of the processor. This paper presents preliminary results.
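As a hedged illustration of the plot-to-track association function mentioned above, the sketch below uses a simple distance gate and greedy nearest-neighbour assignment. This is far simpler than the Multiple Hypothesis Tracking algorithm the fusion processor is built on, and the gate size and positions are invented, but it shows the per-scan decision the association stage has to make.

```python
import math

# Hedged sketch of plot-to-track association: gate each radar plot against
# predicted track positions, then assign greedily by nearest neighbour.
# The real fusion processor uses Multiple Hypothesis Tracking; this only
# illustrates the association decision made on every scan.
GATE_NM = 3.0   # hypothetical gate radius in nautical miles

def associate(tracks, plots):
    """tracks, plots: dicts id -> (x, y); returns (associations, unassigned plots)."""
    candidates = sorted(
        (math.dist(pos_t, pos_p), tid, pid)
        for tid, pos_t in tracks.items()
        for pid, pos_p in plots.items())
    pairs, used_tracks, used_plots = [], set(), set()
    for d, tid, pid in candidates:
        if d <= GATE_NM and tid not in used_tracks and pid not in used_plots:
            pairs.append((tid, pid))
            used_tracks.add(tid)
            used_plots.add(pid)
    unassigned = [pid for pid in plots if pid not in used_plots]
    return pairs, unassigned   # unassigned plots may start new tracks

tracks = {"T1": (10.0, 5.0), "T2": (40.0, 2.0)}
plots = {"P1": (10.4, 5.3), "P2": (70.0, 9.0)}
print(associate(tracks, plots))
```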
Internship Abstract and Final Reflection
NASA Technical Reports Server (NTRS)
Sandor, Edward
2016-01-01
The primary objective of this internship is the evaluation of an embedded natural language processor (NLP) as a way to introduce voice control into future space suits. An embedded natural language processor would provide an astronaut hands-free control for making adjustments to the environment of the space suit and checking the status of consumables, procedures, and navigation. Additionally, the use of an embedded NLP could potentially reduce crew fatigue, increase the crewmember's situational awareness during extravehicular activity (EVA), and improve the ability to focus on mission-critical details. The use of an embedded NLP may also be valuable for other human spaceflight applications requiring hands-free control. An embedded NLP is unique because it is a small device that performs language tasks, including speech recognition, which normally require powerful processors. The dedicated device could perform speech recognition locally with a smaller form factor and lower power consumption than traditional methods.
Extended operating range of the 30-cm ion thruster with simplified power processor requirements
NASA Technical Reports Server (NTRS)
Rawlin, V. K.
1981-01-01
A two-grid, 30-cm-diameter mercury ion thruster was operated with only six power supplies over the baseline J-series thruster power throttle range with negligible impact on thruster performance. An analysis of the functional model power processor showed that the component mass and parts count could be reduced considerably, and the electrical efficiency increased slightly, simply by replacing power supplies with relays. The input power, output thrust, and specific impulse of the thruster were then extended, still using six supplies, from 2660 watts, 0.13 newtons, and 2980 seconds to 9130 watts, 0.37 newtons, and 3820 seconds, respectively. Increases in thrust and power density enable reductions in the number of thrusters and power processors required for most missions. Preliminary assessments of the impact of thruster operation at increased thrust and power density on the discharge characteristics, performance, and lifetime of the thruster were also made.
An optical processor for object recognition and tracking
NASA Technical Reports Server (NTRS)
Sloan, J.; Udomkesmalee, S.
1987-01-01
The design and development of a miniaturized optical processor that performs real-time image correlation are described. The optical correlator utilizes the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm of volume. A complete correlator system would also include an input imaging lens. This optical processing system is compact, rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg. It represents a major achievement in miniaturizing optical processors. When considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of both optical and digital processing could result in a very advanced robot vision system.
Frequency-multiplexed and pipelined iterative optical systolic array processors
NASA Technical Reports Server (NTRS)
Casasent, D.; Jackson, J.; Neuman, C.
1983-01-01
Optical matrix processors using acoustooptic transducers are described, with emphasis on new systolic array architectures using frequency multiplexing in addition to space and time multiplexing. A Kalman filtering application is considered in a case study from which the operations required on such a system can be defined. This also serves as a new and powerful application for iterative optical processors. The importance of pipelining the data flow and the ordering of the operations performed in a specific application of such a system are also noted. Several examples of how to effectively achieve this are included. A new technique for handling bipolar data on such architectures is also described.
Performance of a plasma fluid code on the Intel parallel computers
NASA Technical Reports Server (NTRS)
Lynch, V. E.; Carreras, B. A.; Drake, J. B.; Leboeuf, J. N.; Liewer, P.
1992-01-01
One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A parallel algorithm for plasma turbulence calculations was tested on the Intel iPSC/860 hypercube and the Touchstone Delta machine. Using the 128 processors of the Intel iPSC/860 hypercube, a factor of 5 improvement over a single-processor CRAY-2 is obtained. For the Touchstone Delta machine, the corresponding improvement factor is 16. For plasma edge turbulence calculations, an extrapolation of the present results to the Intel (sigma) machine gives an improvement factor close to 64 over the single-processor CRAY-2.
A 20 MHz CMOS reorder buffer for a superscalar microprocessor
NASA Technical Reports Server (NTRS)
Lenell, John; Wallace, Steve; Bagherzadeh, Nader
1992-01-01
Superscalar processors can achieve increased performance by issuing instructions out of order from the original sequential instruction stream. Implementing an out-of-order instruction issue policy requires a hardware mechanism to prevent incorrectly executed instructions from updating register values. A reorder buffer can be used to allow a superscalar processor to issue instructions out of order while maintaining program correctness. This paper describes the design and implementation of a 20 MHz CMOS reorder buffer for superscalar processors. The reorder buffer is designed to accept and retire two instructions per cycle. A full-custom layout in a 1.2-micron process has been implemented, measuring 1.1058 mm by 1.3542 mm.
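A hedged behavioural sketch of what such a reorder buffer does is given below: entries are allocated in program order, results arrive out of order, and at most two instructions retire per cycle, strictly in order. This models only the bookkeeping, not the custom CMOS design described in the paper; the buffer size and register names are illustrative.

```python
from collections import deque

# Behavioural sketch of a reorder buffer: allocate in program order, mark
# results as they complete out of order, retire (commit) strictly in order,
# at most two per cycle as in the design described above. Illustrative only.
class ReorderBuffer:
    def __init__(self, size=16, retire_width=2):
        self.entries = deque()          # (tag, dest_reg) in program order
        self.done = {}                  # tag -> result value
        self.size, self.retire_width = size, retire_width

    def allocate(self, tag, dest_reg):
        if len(self.entries) >= self.size:
            raise RuntimeError("ROB full: stall dispatch")
        self.entries.append((tag, dest_reg))

    def complete(self, tag, value):     # out-of-order writeback
        self.done[tag] = value

    def retire(self, regfile):          # in-order commit, up to retire_width per cycle
        committed = 0
        while self.entries and committed < self.retire_width:
            tag, dest = self.entries[0]
            if tag not in self.done:
                break                   # oldest not finished: nothing younger commits
            regfile[dest] = self.done.pop(tag)
            self.entries.popleft()
            committed += 1
        return committed

rob, regs = ReorderBuffer(), {}
rob.allocate(1, "r1"); rob.allocate(2, "r2")
rob.complete(2, 7)                      # younger instruction finishes first
print(rob.retire(regs), regs)           # 0 retired: r1 still pending
rob.complete(1, 3)
print(rob.retire(regs), regs)           # both retire this cycle, in order
```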
A floating-point/multiple-precision processor for airborne applications
NASA Technical Reports Server (NTRS)
Yee, R.
1982-01-01
A compact input/output (I/O) numerical processor capable of performing floating-point, multiple-precision, and other arithmetic functions with execution times at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16-bit microprocessor, a numerical coprocessor with eight 80-bit registers running at a 5 MHz clock rate, 18K of random access memory (RAM), and 16K of electrically programmable read-only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high-order languages such as FORTRAN and PL/M-86.
The SPAR thermal analyzer: Present and future
NASA Astrophysics Data System (ADS)
Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.
The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.
The SPAR thermal analyzer: Present and future
NASA Technical Reports Server (NTRS)
Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.
1982-01-01
The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.
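The nonlinear steady-state solution strategy mentioned in both entries above, a modified Newton-Raphson method, can be sketched on a tiny problem. In the hedged example below (not the SPAR processors themselves), a 1-D rod with temperature-dependent conductivity is solved with a Jacobian assembled once at the initial guess and reused every iteration, which is what distinguishes the modified method from full Newton-Raphson. The conductivity law, dimensions, and boundary temperatures are invented for the example.

```python
import numpy as np

# Hedged sketch of a modified Newton-Raphson solve for steady-state conduction
# with temperature-dependent conductivity k(T) = k0 * (1 + beta * T) on a 1-D
# rod with fixed end temperatures. "Modified" here means the Jacobian is built
# once from the initial guess and reused for every iteration.
k0, beta = 1.0, 0.02
n = 6                                   # number of interior nodes
T_left, T_right = 300.0, 400.0

def residual(T):
    Tfull = np.concatenate(([T_left], T, [T_right]))
    k_face = k0 * (1.0 + beta * 0.5 * (Tfull[:-1] + Tfull[1:]))  # conductivity at faces
    flux = k_face * np.diff(Tfull)                               # heat flux per segment
    return flux[1:] - flux[:-1]                                  # net flux into each interior node

T = np.full(n, 300.0)                   # initial guess
eps = 1e-6
# Finite-difference Jacobian at the initial guess, then reused (modified NR)
J = np.array([(residual(T + eps * np.eye(n)[i]) - residual(T)) / eps
              for i in range(n)]).T
for it in range(50):
    dT = np.linalg.solve(J, -residual(T))
    T += dT
    if np.max(np.abs(dT)) < 1e-8:
        break
print(it, T)
```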
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
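As a hedged analogy for the work-farm pattern described above, the sketch below uses an ordinary Python process pool: a homogeneous set of workers consumes one input stream and produces one output stream in order. On the MPPA the workers would be objects running on separate RISC cores connected by self-synchronizing channels; here the per-block work is a placeholder.

```python
from multiprocessing import Pool

# Hedged analogy for the "work farm" design pattern: a homogeneous set of
# workers consuming one input stream and producing one output stream.
# A process pool stands in for worker objects on separate cores.
def worker(block):
    # stand-in for per-block work such as compressing a video macroblock
    return sum(b * b for b in block)

def work_farm(blocks, num_workers=4):
    with Pool(num_workers) as pool:
        # imap preserves the input order on the output stream
        return list(pool.imap(worker, blocks, chunksize=8))

if __name__ == "__main__":
    stream = [list(range(i, i + 16)) for i in range(0, 1024, 16)]
    print(len(work_farm(stream)), "blocks processed")
```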
Computationally Efficient Modeling and Simulation of Large Scale Systems
NASA Technical Reports Server (NTRS)
Jain, Jitesh (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Vankataramanan (Inventor); Cauley, Stephen F (Inventor); Li, Hong (Inventor)
2014-01-01
A system for simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof, including a processor, and a memory, the processor configured to perform obtaining a matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure, the element values for each matrix including inductance L and inverse capacitance P, obtaining an adjacency matrix A associated with the interconnect structure, storing the matrices X, Y, and A in the memory, and performing numerical integration to solve first and second equations.
Cache-based error recovery for shared memory multiprocessor systems
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1989-01-01
A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
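The bookkeeping behind checkpoint identifiers and recovery stacks can be illustrated with a hedged toy model: every write since the last checkpoint records the overwritten value, tagged with the current checkpoint identifier, so a detected transient error can be undone by rolling back to the last checkpoint. The sketch below ignores caches and coherence entirely; it is an illustration of the idea, not the scheme's implementation.

```python
# Toy model of checkpoint identifiers and a recovery stack: each write since
# the last checkpoint pushes the overwritten value, tagged with the current
# checkpoint id, so a transient processor error can be undone by rollback.
class CheckpointedMemory:
    def __init__(self):
        self.mem = {}
        self.ckpt_id = 0
        self.recovery_stack = []        # (ckpt_id, address, old_value)

    def checkpoint(self):
        self.ckpt_id += 1               # establish a new recovery point
        self.recovery_stack.clear()     # earlier undo records are no longer needed

    def write(self, addr, value):
        self.recovery_stack.append((self.ckpt_id, addr, self.mem.get(addr)))
        self.mem[addr] = value

    def rollback(self):
        while self.recovery_stack:
            _, addr, old = self.recovery_stack.pop()
            if old is None:
                self.mem.pop(addr, None)
            else:
                self.mem[addr] = old

m = CheckpointedMemory()
m.write("x", 1); m.checkpoint()
m.write("x", 99); m.write("y", 5)       # work performed after the checkpoint
m.rollback()                            # transient error detected
print(m.mem)                            # {'x': 1}
```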
Park, Daejin; Cho, Jeonghun
2014-01-01
A specially designed sensor processor, used as the main processor in an IoT (Internet of Things) device for rare-event sensing applications, is proposed. The IoT device including the proposed sensor processor performs event-driven sensor data processing based on an accuracy-energy configurable event quantization at the architectural level. The received sensor signal is converted into a sequence of atomic events, which are extracted by the signal-to-atomic-event generator (AEG). Using an event signal processing unit (EPU) as an accelerator, the extracted atomic events are analyzed to build the final event. Instead of transmitting the sampled raw data via the internet, the proposed method delays communication with a host system until a semantic pattern of the signal is identified as a final event. The proposed processor is implemented on a single chip, which is tightly coupled at the bus-connection level with a microcontroller using a 0.18 μm CMOS embedded-flash process. For experimental results, we evaluated the proposed sensor processor by using an IR- (infrared-) based signal reflection and sensor signal acquisition system. We successfully demonstrated that the expected power consumption is in the range of 20% to 50% of the baseline when a 10% accuracy error is allowed.
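The event-quantization flow described above, in which raw samples become atomic events and the host is only contacted once a semantic pattern (the final event) is recognized, is illustrated by the hedged sketch below. The thresholds, the rise/fall events, and the pulse pattern are invented for the example and are not the chip's actual design.

```python
# Hedged sketch of event-driven sensing: quantize the raw signal into atomic
# events (threshold crossings) and only report to the host once a semantic
# pattern of atomic events -- the final event -- is recognized.
HIGH, LOW = 0.7, 0.3    # illustrative hysteresis thresholds

def atomic_events(samples):
    """Yield (kind, index) atomic events at threshold crossings."""
    state = "low"
    for i, s in enumerate(samples):
        if state == "low" and s > HIGH:
            state = "high"
            yield ("rise", i)
        elif state == "high" and s < LOW:
            state = "low"
            yield ("fall", i)

def detect_final_event(samples, max_gap=10):
    """Final event: a rise followed by a fall within max_gap samples."""
    last_rise = None
    for kind, i in atomic_events(samples):
        if kind == "rise":
            last_rise = i
        elif kind == "fall" and last_rise is not None and i - last_rise <= max_gap:
            return ("pulse", last_rise, i)   # only now would the host be notified
    return None

signal = [0.1, 0.2, 0.9, 0.95, 0.8, 0.2, 0.1, 0.05]
print(detect_final_event(signal))
```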