ASR-9 processor augmentation card (9-PAC) phase II scan-scan correlator algorithms
DOT National Transportation Integrated Search
2001-04-26
The report documents the scan-scan correlator (tracker) algorithm developed for Phase II of the ASR-9 Processor Augmentation Card (9-PAC) project. The improved correlation and tracking algorithms in 9-PAC Phase II decrease the incidence of false-alar...
Holo-Chidi video concentrator card
NASA Astrophysics Data System (ADS)
Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.
2001-12-01
The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo-Chidi is designed at the MIT Media Laboratory for real-time computation of computer-generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards - the set of Processor cards and the set of Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and for higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram-computing Processor cards, converting the digital data to analog form to feed the acousto-optic modulators of the Media Lab's Mark-II holographic display system. The Video Concentrator Card is made of: a High-Speed I/O (HSIO) interface through which data is transferred from the hologram-computing Processor cards, a set of FIFOs and video RAM used as buffers for data for the hololines being displayed, a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port, a co-processor which controls display data formatting, and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 Megabytes of computed holographic data can flow from the Processor cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame buffering capacity to hold up to thirty-two 36-Megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low-speed USB port. Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control programs and data for the VCC. The card also generates the control signals used by the scanning mirrors of Mark-II. In this paper we discuss the design of the VCC and its implementation in the Holo-Chidi system.
Commercial Off-The-Shelf (COTS) Graphics Processing Board (GPB) Radiation Test Evaluation Report
NASA Technical Reports Server (NTRS)
Salazar, George A.; Steele, Glen F.
2013-01-01
Large round-trip communications latency for deep space missions will require more onboard computational capabilities to enable the space vehicle to undertake many tasks that have traditionally been ground-based mission control responsibilities. As a result, visual display graphics will be required to provide simpler vehicle situational awareness through graphical representations, as well as provide capabilities never before used in a space mission, such as augmented reality for in-flight maintenance or telepresence activities. These capabilities will require graphics processors and associated support electronic components for high computational graphics processing. In an effort to understand the performance of commercial graphics card electronics operating in the expected radiation environment, a preliminary test was performed on five commercial off-the-shelf (COTS) graphics cards. This paper discusses the preliminary evaluation test results of five COTS graphics processing cards tested to the International Space Station (ISS) low Earth orbit radiation environment. Three of the five graphics cards were tested to a total dose of 6000 rads (Si). The test articles, test configuration, preliminary results, and recommendations are discussed.
A Cost Effective System Design Approach for Critical Space Systems
NASA Technical Reports Server (NTRS)
Abbott, Larry Wayne; Cox, Gary; Nguyen, Hai
2000-01-01
NASA-JSC required an avionics platform capable of serving a wide range of applications in a cost-effective manner. In part, making the avionics platform cost effective means adhering to open standards and supporting the integration of COTS products with custom products. Inherently, operation in space requires low power, mass, and volume while retaining high performance, reconfigurability, scalability, and upgradability. The Universal Mini-Controller project is based on a modified PC/104-Plus architecture while maintaining full compatibility with standard COTS PC/104 products. The architecture consists of a library of building block modules, which can be mixed and matched to meet a specific application. A set of NASA-developed core building blocks (a processor card, an analog input/output card, and a Mil-Std-1553 card) has been constructed to meet critical functions and unique interfaces. The design for the processor card is based on the PowerPC architecture. This architecture provides an excellent balance between power consumption and performance, and has an upgrade path to the forthcoming radiation-hardened PowerPC processor. The processor card, which makes extensive use of surface mount technology, has a 166 MHz PowerPC 603e processor, 32 Mbytes of error-detected-and-corrected RAM, 8 Mbytes of Flash, and 1 Mbyte of EPROM on a single PC/104-Plus card. Similar densities have been achieved with the quad-channel Mil-Std-1553 card and the analog input/output cards. The power management built into the processor and its peripheral chip allows the power and performance of the system to be adjusted to meet the requirements of the application, adding another dimension to the flexibility of the Universal Mini-Controller. Unique mechanical packaging allows the Universal Mini-Controller to accommodate standard COTS and custom oversized PC/104-Plus cards. This mechanical packaging also provides thermal management via conductive cooling of COTS boards, which are typically designed for convection cooling methods.
A universal computer control system for motors
NASA Technical Reports Server (NTRS)
Szakaly, Zoltan F. (Inventor)
1991-01-01
A control system for a multi-motor system such as a space telerobot, having a remote computational node and a local computational node interconnected with one another by a high-speed data link, is described. A Universal Computer Control System (UCCS) for the telerobot is located at each node. Each node is provided with a multibus computer system which is characterized by a plurality of processors with all processors being connected to a common bus, and including at least one command processor. The command processor communicates over the bus with a plurality of joint controller cards. A plurality of direct-current torque motors, of the type used in telerobot joints and telerobot hand-held controllers, are connected to the controller cards and respond to digital control signals from the command processor. Essential motor operating parameters are sensed by analog sensing circuits and the sensed analog signals are converted to digital signals for storage at the controller cards, where such signals can be read during an address read/write cycle of the command processor.
Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shan, Hongzhang; Williams, Samuel; Jong, Wibe de
In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65x better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6x better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.
Thread-level parallelization and optimization of NWChem for the Intel MIC architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shan, Hongzhang; Williams, Samuel; de Jong, Wibe
In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65× better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6× better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.
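To make the threading strategy concrete, the following is a minimal sketch (not NWChem source; array names, layout, and extents are illustrative) of applying OpenMP to a deep tensor-contraction loop nest of the kind described for the CCSD(T) triples:

```cpp
// Minimal illustration of threading a tensor-contraction loop nest
// (C[i][j] += A[i][k] * B[k][j]) with OpenMP, the basic pattern applied to the
// CCSD(T) triples loops. Not NWChem code; names and sizes are placeholders.
#include <vector>

void contract(const std::vector<double>& A, const std::vector<double>& B,
              std::vector<double>& C, int n) {
    // Collapsing the two outer loops exposes n*n independent iterations,
    // enough to keep the many hardware threads of a MIC-style chip busy.
    #pragma omp parallel for collapse(2) schedule(static)
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            double sum = 0.0;
            for (int k = 0; k < n; ++k)             // innermost loop stays serial
                sum += A[i * n + k] * B[k * n + j]; // and vectorizes with 512-bit SIMD
            C[i * n + j] += sum;
        }
    }
}
```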
ARSC: Augmented Reality Student Card--An Augmented Reality Solution for the Education Field
ERIC Educational Resources Information Center
El Sayed, Neven A. M.; Zayed, Hala H.; Sharawy, Mohamed I.
2011-01-01
Augmented Reality (AR) is the technology of adding virtual objects to real scenes through enabling the addition of missing information in real life. As the lack of resources is a problem that can be solved through AR, this paper presents and explains the usage of AR technology. We introduce the Augmented Reality Student Card (ARSC) as an application of…
List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor
NASA Astrophysics Data System (ADS)
Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.
2014-03-01
List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run four multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than for the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.
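As a rough illustration of the structure being threaded (not the authors' code; the Event layout and names are hypothetical), the independent list-mode events can be distributed across OpenMP threads with atomic updates into the shared image array:

```cpp
// Sketch of a threaded list-mode back-projection loop: events are independent,
// so the event loop is parallelized; concurrent voxel updates are made atomic.
// Illustrative only; not the published reconstruction code.
#include <vector>
#include <cstddef>

struct Event { int voxel_start, voxel_end; float weight; };  // hypothetical layout

void backproject(const std::vector<Event>& events, std::vector<float>& image) {
    #pragma omp parallel for schedule(dynamic, 1024)
    for (std::ptrdiff_t e = 0; e < static_cast<std::ptrdiff_t>(events.size()); ++e) {
        for (int v = events[e].voxel_start; v <= events[e].voxel_end; ++v) {
            // Different events may touch the same voxel, so the update is atomic.
            #pragma omp atomic
            image[v] += events[e].weight;
        }
    }
}
```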
Code of Federal Regulations, 2014 CFR
2014-01-01
... Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM (CONTINUED... transactions over a payment card network. An acquirer does not include a person that acts only as a processor... management or policies of the company, as the Board determines. (f) Debit card (1) Means any card, or other...
12 CFR 235.7 - Limitations on payment card restrictions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Section 235.7 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL... restrictions. (a) Prohibition on network exclusivity—(1) In general. An issuer or payment card network shall not directly or through any agent, processor, or licensed member of a payment card network, by...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM (CONTINUED... transactions over a payment card network. An acquirer does not include a person that acts only as a processor... management or policies of the company, as the Board determines. (f) Debit card (1) Means any card, or other...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM (CONTINUED... transactions over a payment card network. An acquirer does not include a person that acts only as a processor... management or policies of the company, as the Board determines. (f) Debit card (1) Means any card, or other...
12 CFR 235.7 - Limitations on payment card restrictions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Section 235.7 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL... restrictions. (a) Prohibition on network exclusivity—(1) In general. An issuer or payment card network shall not directly or through any agent, processor, or licensed member of a payment card network, by...
12 CFR 235.7 - Limitations on payment card restrictions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Section 235.7 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL... restrictions. (a) Prohibition on network exclusivity—(1) In general. An issuer or payment card network shall not directly or through any agent, processor, or licensed member of a payment card network, by...
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022574 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022575 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
NASA Technical Reports Server (NTRS)
Sood, Bhanu; Evans, John; Daniluk, Kelly; Sturgis, Jason; Davis, Milton; Petrick, David
2017-01-01
In this reliability life cycle evaluation of the SpaceCube 2.0 processor card, a partially populated version of the card is being evaluated to determine its durability with respect to typical GSFC mission loads.
Master/Programmable-Slave Computer
NASA Technical Reports Server (NTRS)
Smaistrla, David; Hall, William A.
1990-01-01
Unique modular computer features compactness, low power, mass storage of data, multiprocessing, and choice of various input/output modes. Master processor communicates with user via usual keyboard and video display terminal. Coordinates operations of as many as 24 slave processors, each dedicated to a different experiment. Each slave circuit card includes slave microprocessor and assortment of input/output circuits for communication with external equipment, with master processor, and with other slave processors. Adaptable to industrial process control with selectable degrees of automatic control, automatic and/or manual monitoring, and manual intervention.
Bringing Algorithms to Life: Cooperative Computing Activities Using Students as Processors.
ERIC Educational Resources Information Center
Bachelis, Gregory F.; And Others
1994-01-01
Presents cooperative computing activities in which each student plays the role of a switch or processor and acts out algorithms. Includes binary counting, finding the smallest card in a deck, sorting by selection and merging, adding and multiplying large numbers, and sieving for primes. (16 references) (Author/MKR)
77 FR 46258 - Debit Card Interchange Fees and Routing
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-03
... the types of fraud, methods used to commit fraud, and available fraud-prevention methods. An issuer... 920(a)(5) requires the Board to consider (1) the nature, type, and occurrence of fraud in electronic..., merchant trade associations, a card-payment processor, technology companies, a member of Congress...
Easy robot programming for beginners and kids using augmented reality environments
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Nishiguchi, Masahiro
2010-11-01
The authors have developed a mobile robot that can be programmed with command and instruction cards. All you have to do is arrange the cards on a table and shoot the programming stage with a camera. Our card programming system recognizes the instruction cards and translates the icon commands into the motor driver program. This card programming environment also provides low-level structured programming.
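A toy sketch of the card-to-command translation idea (the card names and motor command values are hypothetical, not the authors' instruction set):

```cpp
// Hypothetical sketch: recognized instruction cards (icons) are translated
// into motor-driver commands and executed in the order they appear on the table.
#include <string>
#include <vector>
#include <iostream>

struct MotorCommand { int left_speed, right_speed, duration_ms; };

MotorCommand translate(const std::string& icon) {
    if (icon == "FORWARD")    return {100, 100, 1000};
    if (icon == "TURN_LEFT")  return {-50,  50,  500};
    if (icon == "TURN_RIGHT") return { 50, -50,  500};
    return {0, 0, 0};  // unknown card: stop
}

int main() {
    // Cards laid out on the table, as recognized from the camera image.
    std::vector<std::string> cards = {"FORWARD", "TURN_LEFT", "FORWARD"};
    for (const auto& c : cards) {
        MotorCommand cmd = translate(c);
        std::cout << c << " -> L" << cmd.left_speed << " R" << cmd.right_speed
                  << " for " << cmd.duration_ms << " ms\n";
    }
}
```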
NASA Technical Reports Server (NTRS)
Hall, William A.
1990-01-01
Slave microprocessors in multimicroprocessor computing system contain modified circuit cards programmed via bus connecting master processor with slave microprocessors. Enables interactive, microprocessor-based, single-loop control. Confers ability to load and run program from master/slave bus, without need for microprocessor development station. Tristate buffers latch all data and information on status. Slave central processing unit never connected directly to bus.
The Electronic Flight Bag: A Multi-Function Tool for the Modern Cockpit
2002-08-01
[Table excerpt: listed hardware interfaces include 56K modem, sound card, touchscreen, USB, PCMCIA, IR/IrDA, and wireless LAN] ...card, internal modem and augmented internal battery. It is designed to complement the PID for use in the classroom, at home, or on the road.59
ARTS III/Parallel Processor Design Study
DOT National Transportation Integrated Search
1975-04-01
It was the purpose of this design study to investigate the feasibility, suitability, and cost-effectiveness of augmenting the ARTS III failsafe/failsoft multiprocessor system with a form of parallel processor to accommodate a large growth in air traff...
ERIC Educational Resources Information Center
Jastrzembski, Tiffany S.; Charness, Neil
2007-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20;…
2011-12-29
ISS030-E-017776 (29 Dec. 2011) --- Working in chorus with the International Space Station team in Houston's Mission Control Center, this astronaut and his Expedition 30 crewmates on the station install a set of Enhanced Processor and Integrated Communications (EPIC) computer cards in one of seven primary computers onboard. The upgrade will allow more experiments to operate simultaneously, and prepare for the arrival of commercial cargo ships later this year.
Signal processing for smart cards
NASA Astrophysics Data System (ADS)
Quisquater, Jean-Jacques; Samyde, David
2003-06-01
In 1998, Paul Kocher showed that when a smart card computes cryptographic algorithms, for signatures or encryption, its consumption or its radiations leak information. The keys or the secrets hidden in the card can then be recovered using a differential measurement based on the intercorrelation function. A lot of silicon manufacturers use desynchronization countermeasures to defeat power analysis. In this article we detail a new resynchronization technique. This method can be used to facilitate the use of a neural network to perform code recognition. It becomes possible to reverse engineer a software code automatically. Using data and clock separation methods, we show how to optimize the synchronization using signal processing. Then we compare these methods with watermarking methods for 1D and 2D signals. The latest watermarking detection improvements can be applied to signal processing for smart cards with very few modifications. Bayesian processing is one of the best ways to do Differential Power Analysis, and it is possible to extract a PIN code from a smart card with very few samples. So this article shows the need to continue to set up effective countermeasures for cryptographic processors. Although the idea of using advanced signal processing operators has been commonly known for a long time, no publication has shown that such results can be obtained. The main idea of differential measurement is to use the cross-correlation of two random variables and to repeat consumption measurements on the processor to be analyzed. We use two processors clocked at the same external frequency and computing the same data. The applications of our design are numerous. Two measurements provide the inputs of a central operator. With the most accurate operator we can improve the signal-to-noise ratio, re-synchronize the acquisition clock with the internal one, or remove jitter. The analysis based on consumption or electromagnetic measurements can be improved using our structure. At first sight the same results can be obtained with only one smart card, but this idea is not completely true because the statistical properties of the signal are not the same. As the two smart cards are submitted to the same external noise during the measurement, it is easier to reduce the influence of perturbations. This paper shows the importance of accurate countermeasures against differential analysis.
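For reference, the differential-measurement idea rests on standard definitions (generic formulas, not quoted from the paper): the discrete cross-correlation of two acquired traces x and y,

(x \star y)[\tau] = \sum_{t} x[t]\, y[t+\tau],

peaks at the offset \tau used to resynchronize the two acquisitions, while the normalized correlation coefficient

\rho = \operatorname{Cov}(x, y) / (\sigma_x \sigma_y)

quantifies how strongly the two aligned consumption measurements agree.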
Flexible Peripheral Component Interconnect Input/Output Card
NASA Technical Reports Server (NTRS)
Bigelow, Kirk K.; Jerry, Albert L.; Baricio, Alisha G.; Cummings, Jon K.
2010-01-01
The Flexible Peripheral Component Interconnect (PCI) Input/Output (I/O) Card is an innovative circuit board that provides functionality to interface between a variety of devices. It supports user-defined interrupts for interface synchronization, tracks system faults and failures, and includes checksum and parity evaluation of interface data. The card supports up to 16 channels of high-speed, half-duplex, low-voltage digital signaling (LVDS) serial data, and can interface combinations of serial and parallel devices. Placement of a processor within the field programmable gate array (FPGA) controls an embedded application with links to host memory over its PCI bus. The FPGA also provides protocol stacking and quick digital signal processor (DSP) functions to improve host performance. Hardware timers, counters, state machines, and other glue logic support interface communications. The Flexible PCI I/O Card provides an interface for a variety of dissimilar computer systems, featuring direct memory access functionality. The card has the following attributes: 8/16/32-bit, 33-MHz PCI r2.2 compliance; configurable for universal 3.3V/5V interface slots; PCI interface based on PLX Technology's PCI9056 ASIC; general-use 512K x 16 SDRAM memory; general-use 1M x 16 Flash memory; FPGA with 3K to 56K logical cells with embedded 27K to 198K bits RAM; I/O interface: 32-channel LVDS differential transceivers configured in eight 4-bit banks, signaling rates to 200 MHz per channel; common SCSI-3, 68-pin interface connector.
A Qualitative Security Analysis of a New Class of 3-D Integrated Crypto Co-processors
2012-01-01
and mobile phones, lottery ticket vending machines, and various electronic payment systems. The main reason for their use in such applications is that...military applications such as secure communication links. However, the proliferation of Automated Teller Machines (ATMs) in the ’80s introduced them to...commercial applications. Today many popular consumer devices have cryptographic processors in them, for example, smart cards for pay-TV access machines
Magic cards: a new augmented-reality approach.
Demuynck, Olivier; Menendez, José Manuel
2013-01-01
Augmented reality (AR) commonly uses markers for detection and tracking. Such multimedia applications associate each marker with a virtual 3D model stored in the memory of the camera-equipped device running the application. Application users are limited in their interactions, which require knowing how to design and program 3D objects. This generally prevents them from developing their own entertainment AR applications. The Magic Cards application solves this problem by offering an easy way to create and manage an unlimited number of virtual objects that are encoded on special markers.
Augmented Reality Comes to Physics
ERIC Educational Resources Information Center
Buesing, Mark; Cook, Michael
2013-01-01
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS and mobile apps, such as…
Integrating Fingerprint Verification into the Smart Card-Based Healthcare Information System
NASA Astrophysics Data System (ADS)
Moon, Daesung; Chung, Yongwha; Pan, Sung Bum; Park, Jin-Won
2009-12-01
As VLSI technology has improved, smart cards employing 32-bit processors have been released, and more personal information, such as medical and financial data, can be stored in the card. Thus, it becomes important to protect personal information stored in the card. Verification of the card holder's identity using a fingerprint has advantages over the present practices of Personal Identification Numbers (PINs) and passwords. However, the computational workload of fingerprint verification is much heavier than that of the typical PIN-based solution. In this paper, we consider three strategies to implement fingerprint verification in a smart card environment and how to distribute the modules of fingerprint verification between the smart card and the card reader. We first evaluate the number of instructions of each step of a typical fingerprint verification algorithm, and estimate the execution time of several cryptographic algorithms to guarantee the security/privacy of the fingerprint data transmitted in the smart card with the client-server environment. Based on the evaluation results, we analyze each scenario with respect to the security level and the real-time execution requirements in order to implement fingerprint verification in the smart card with the client-server environment.
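As a back-of-the-envelope illustration of the partitioning question (the instruction counts and processor speeds below are hypothetical placeholders, not the paper's measurements), one can estimate where each verification module should run:

```cpp
// Illustrative sketch: estimate per-module execution time from instruction
// counts to decide which fingerprint-verification steps run on the smart card
// versus the card reader. All numbers are assumed, not measured values.
#include <cstdio>

struct Module { const char* name; double mega_instructions; };

int main() {
    const double card_mips = 5.0;     // assumed 32-bit smart-card processor
    const double reader_mips = 200.0; // assumed card-reader host
    Module steps[] = {{"image preprocessing", 40.0},
                      {"minutiae extraction", 120.0},
                      {"matching", 2.0}};
    for (const Module& m : steps) {
        std::printf("%-20s card: %6.2f s   reader: %6.3f s\n", m.name,
                    m.mega_instructions / card_mips,
                    m.mega_instructions / reader_mips);
    }
    // A match-on-card split keeps only the cheap matching step on the card and
    // moves the heavy extraction to the reader, at the cost of protecting the
    // transmitted fingerprint data cryptographically.
}
```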
NASA Technical Reports Server (NTRS)
Cecil, R. W.; White, R. A.; Szczur, M. R.
1972-01-01
The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
SpaceCube v2.0 Space Flight Hybrid Reconfigurable Data Processing System
NASA Technical Reports Server (NTRS)
Petrick, Dave
2014-01-01
This paper details the design architecture, design methodology, and the advantages of the SpaceCube v2.0 high performance data processing system for space applications. The purpose in building the SpaceCube v2.0 system is to create a superior high performance, reconfigurable, hybrid data processing system that can be used in a multitude of applications including those that require a radiation hardened and reliable solution. The SpaceCube v2.0 system leverages seven years of board design, avionics systems design, and space flight application experiences. This paper shows how SpaceCube v2.0 solves the increasing computing demands of space data processing applications that cannot be attained with a standalone processor approach. The main objective during the design stage is to find a good system balance between power, size, reliability, cost, and data processing capability. These design variables directly impact each other, and it is important to understand how to achieve a suitable balance. This paper will detail how these critical design factors were managed including the construction of an Engineering Model for an experiment on the International Space Station to test out design concepts. We will describe the designs for the processor card, power card, backplane, and a mission unique interface card. The mechanical design for the box will also be detailed since it is critical in meeting the stringent thermal and structural requirements imposed by the processing system. In addition, the mechanical design uses advanced thermal conduction techniques to solve the internal thermal challenges. The SpaceCube v2.0 processing system is based on an extended version of the 3U cPCI standard form factor where each card is 190mm x 100mm in size. The typical power draw of the processor card is 8 to 10W and scales with application complexity. The SpaceCube v2.0 data processing card features two Xilinx Virtex-5 QV Field Programmable Gate Arrays (FPGA), eight memory modules, a monitor FPGA with analog monitoring, Ethernet, configurable interconnect to the Xilinx FPGAs including gigabit transceivers, and the necessary voltage regulation. The processor board uses a back-to-back design methodology for common parts that maximizes the board real estate available. This paper will show how to meet the IPC 6012B Class 3A standard with a 22-layer board that has two column grid array devices with 1.0mm pitch. All layout trades such as stack-up options, via selection, and FPGA signal breakout will be discussed with feature size results. The overall board design process will be discussed including parts selection, circuit design, proper signal termination, layout placement and route planning, signal integrity design and verification, and power integrity results. The radiation mitigation techniques will also be detailed including configuration scrubbing options, Xilinx circuit mitigation and FPGA functional monitoring, and memory protection. Finally, this paper will describe how this system is being used to solve the extreme challenges of a robotic satellite servicing mission where typical space-rated processors are not sufficient enough to meet the intensive data processing requirements. The SpaceCube v2.0 is the main payload control computer and is required to control critical subsystems such as autonomous rendezvous and docking using a suite of vision sensors and object avoidance when controlling two robotic arms.
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1991-01-01
The hardware and the software architecture of the TurboLAN Intelligent Network Adapter Card (TINAC) are described. A high level as well as detailed treatment of the workings of various components of the TINAC are presented. The TINAC is divided into the following four major functional units: (1) the network access unit (NAU); (2) the buffer management unit; (3) the host interface unit; and (4) the node processor unit.
Experimentation and Evaluation of Advanced Integrated System Concepts.
1980-09-26
ART). (b) Selects one of four trunk circuits from each trunk circuit card. (m) Dual Modem and Loop Interface (DMLI) card. (n) Dictation and paging... [Figure 4-1: arbitrator, L-bus, and modem interconnections] Certain Telenet Processor models (see Section 4.3 for details) can be equipped with redundancy to... [Figure 4-2: memory banks A and B, arbitrators, interface, and modems] In a system with common logic redundancy all centrally...
ERIC Educational Resources Information Center
Perez, Ernest
1997-01-01
Examines the practical realities of upgrading Intel personal computers in libraries, considering budgets and technical personnel availability. Highlights include adding RAM; putting in faster processor chips, including clock multipliers; new hard disks; CD-ROM speed; motherboards and interface cards; cost limits and economic factors; and…
NEW EPICS/RTEMS IOC BASED ON ALTERA SOC AT JEFFERSON LAB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Jianxun; Seaton, Chad; Allison, Trent L.
A new EPICS/RTEMS IOC based on the Altera System-on-Chip (SoC) FPGA is being designed at Jefferson Lab. The Altera SoC FPGA integrates a dual ARM Cortex-A9 Hard Processor System (HPS) consisting of processor, peripherals and memory interfaces tied seamlessly with the FPGA fabric using a high-bandwidth interconnect backbone. The embedded Altera SoC IOC features remote network boot via U-Boot from SD card or QSPI Flash, 1Gig Ethernet, 1GB DDR3 SDRAM on the HPS, UART serial ports, and an ISA bus interface. The RTEMS BSP for the ARM processor was built with the CEXP shell, which dynamically loads the EPICS applications at runtime. U-Boot is the primary bootloader; it remotely loads the kernel image into local memory from a DHCP/TFTP server over Ethernet and automatically runs RTEMS and EPICS. The first design of the SoC IOC will be compatible with Jefferson Lab's current PC104 IOCs, which have been running in CEBAF for 10 years. The next design would be mounted in a chassis and connected to a daughter card via standard HSMC connectors. This standard SoC IOC will become the next generation of low-level IOC for the accelerator controls at Jefferson Lab.
Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists.
Testolin, Alberto; Stoianov, Ivilin; De Filippo De Grazia, Michele; Zorzi, Marco
2013-01-01
Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires big datasets and it can involve millions of connection weights, which implies that simulations on standard computers are unfeasible. Developing realistic, medium-to-large-scale learning models of cognition would therefore seem to require expertise in programming parallel-computing hardware, and this might explain why the use of this promising approach is still largely confined to the machine learning community. Here we show how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards (graphics processing units) without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python). We also show that even an entry-level graphics card can outperform a small high-performance computing cluster in terms of learning time and with no loss of learning quality. We therefore conclude that graphics card implementations pave the way for a widespread use of deep learning among cognitive scientists for modeling cognition and behavior.
Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists
Testolin, Alberto; Stoianov, Ivilin; De Filippo De Grazia, Michele; Zorzi, Marco
2013-01-01
Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires big datasets and it can involve millions of connection weights, which implies that simulations on standard computers are unfeasible. Developing realistic, medium-to-large-scale learning models of cognition would therefore seem to require expertise in programming parallel-computing hardware, and this might explain why the use of this promising approach is still largely confined to the machine learning community. Here we show how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards (graphics processing units) without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python). We also show that even an entry-level graphics card can outperform a small high-performance computing cluster in terms of learning time and with no loss of learning quality. We therefore conclude that graphics card implementations pave the way for a widespread use of deep learning among cognitive scientists for modeling cognition and behavior. PMID:23653617
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor to processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
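A minimal sketch of the majority-voting idea described above (illustrative only; not the SIFT executive software):

```cpp
// Error masking by majority vote: the same computation runs on several
// processors and a strict majority over the replicated results masks a faulty
// value, as in the SIFT scheme summarized above. Not the actual SIFT code.
#include <vector>
#include <unordered_map>
#include <optional>

std::optional<int> majority_vote(const std::vector<int>& replicas) {
    std::unordered_map<int, int> counts;
    for (int r : replicas) ++counts[r];
    for (const auto& [value, count] : counts)
        if (2 * count > static_cast<int>(replicas.size()))
            return value;      // a strict majority agrees on this value
    return std::nullopt;       // no majority: flag for reconfiguration
}

// Example: majority_vote({42, 42, 17}) yields 42; the processor reporting 17
// can then be removed from service by reassigning its computations.
```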
NASA Technical Reports Server (NTRS)
Hirt, E. F.; Fox, G. L.
1982-01-01
Two specific NASTRAN preprocessors and postprocessors are examined. A postprocessor for dynamic analysis and a graphical interactive package for model generation and review of results are presented. A computer program that provides response spectrum analysis capability based on data from a NASTRAN finite element model is described, and the GIFTS system, a graphic processor to augment NASTRAN, is introduced.
Report Card. Functional Models of Institutional Research and Other Selected Papers.
ERIC Educational Resources Information Center
Brown, Charles I., Ed.
Presentations are on the five topics of functional models of institutional research: (1) and the political scene; (2) at public and private colleges and universities; (3) for improving communications between institutional researchers and data processors; (4) for deriving qualitative decisions from quantitative data; and (5) for special interest…
Augmented Reality Comes to Physics
NASA Astrophysics Data System (ADS)
Buesing, Mark; Cook, Michael
2013-04-01
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS and mobile apps, such as Tagwhat and Star Chart (a must for astronomy class). The yellow line marking first downs in a televised football game2 and the enhanced puck that makes televised hockey easier to follow3 both use augmented reality to do the job.
Augmented reality application utility for aviation maintenance work instruction
NASA Astrophysics Data System (ADS)
Pourcho, John Bryan
Current aviation maintenance work instructions do not display information effectively enough to prevent costly errors and safety concerns. Aircraft are complex assemblies of highly interrelated components that confound troubleshooting and can make the maintenance procedure difficult (Drury & Gramopadhye, 2001). The sophisticated nature of aircraft maintenance necessitates a revolutionized training intervention for aviation maintenance technicians (United States General Accounting Office, 2003). Quite simply, the paper-based job task cards fall short of offering rapid access to technical data and the system or component visualization necessary for working on complex integrated aircraft systems. Possible solutions to this problem include upgraded standards for paper-based task cards and the use of integrated 3D product definition used on various mobile platforms (Ropp, Thomas, Lee, Broyles, Lewin, Andreychek, & Nicol, 2013). Previous studies have shown that incorporation of 3D graphics in work instructions allows the user to more efficiently and accurately interpret maintenance information (Jackson & Batstone, 2008). For aircraft maintenance workers, the use of mobile 3D model-based task cards could make current paper task card standards obsolete with their ability to deliver relevant, synchronized information to and from the hangar. Unlike previous versions of 3D model-based definition task cards and paper task cards, which are currently used in the maintenance industry, 3D model-based definition task cards have the potential to be more mobile and accessible. Utilizing augmented reality applications on mobile devices to seamlessly deliver 3D product definition could increase the efficiency and accuracy, and reduce the mental workload, of technicians performing maintenance tasks (Macchiarella, 2004). This proposal will serve as a literature review of the aviation maintenance industry, the spatial ability of maintenance technicians, and the benefits of modern digital hardware to educate, point out gaps in research, and observe possible foundations on which to build the future of aviation maintenance job task cards, leading to the methodology of the proposed study.
Adoption of smart cards in the medical sector: the Canadian experience.
Auber, B A; Hamel, G
2001-10-01
This research evaluates the factors influencing the adoption of smart cards in the medical sector (a smart card has a micro-processor containing information about the patient: identification, emergency data (allergies, blood type, etc.), vaccination, drugs used, and the general medical record). This research was conducted after a pilot study designed to evaluate the use of such smart cards. Two hundred and ninety-nine professionals, along with 7248 clients, used the smart card for a year. The targeted population included mostly elderly people, infants, and pregnant women (the most intensive users of health care services). Following this pilot study, two surveys were conducted, together with numerous interviews, to assess the factors influencing adoption of the technology. A general picture emerged, indicating that although the new card is well-perceived by individuals, tangible benefits must be available to motivate professionals and clients to adopt the technology. Results show that the fundamental dimension that needs to be assessed before massive diffusion is the relative advantage to the professional. The system must provide a direct benefit to its user. The relative advantage of the system for the professional is directly linked to the obligation for the client to use the card. The system is beneficial for the professional only if the information on the card is complete. Technical adequacy is a necessary but not sufficient condition for adoption.
Evaluation of an Adaptive Automation Trigger Based on Task Performance, Priority, and Frequency
2013-06-01
with dual Intel® Xeon® CPU x5550 processors @ 2.67 GHz each, 12.0 GB RAM, and a 1.5 GB PCIe nVidia Quadro FX 4800 graphics card (Microsoft...Cole Publishing Company. Miller, C. A., & Parasuraman, R. (2007). Designing for flexible interaction between humans and automation: Delegation
Onboard spectral imager data processor
NASA Astrophysics Data System (ADS)
Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.
1999-10-01
Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.
The design of dual-mode complex signal processors based on quadratic modular number codes
NASA Astrophysics Data System (ADS)
Jenkins, W. K.; Krogmeier, J. V.
1987-04-01
It has been known for a long time that quadratic modular number codes admit an unusual representation of complex numbers which leads to complete decoupling of the real and imaginary channels, thereby simplifying complex multiplication and providing error isolation between the real and imaginary channels. This paper first presents a tutorial review of the theory behind the different types of complex modular rings (fields) that result from particular parameter selections, and then presents a theory for a 'dual-mode' complex signal processor based on the choice of augmented power-of-2 moduli. It is shown how a diminished-1 binary code, used by previous designers for the realization of Fermat number transforms, also leads to efficient realizations for dual-mode complex arithmetic for certain augmented power-of-2 moduli. Then a design is presented for a recursive complex filter based on a ROM/ACCUMULATOR architecture and realized in an augmented power-of-2 quadratic code, and a computer-generated example of a complex recursive filter is shown to illustrate the principles of the theory.
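For context, the decoupling can be stated with the standard quadratic-residue construction (a generic formulation, not quoted from the paper): for a modulus p admitting an element j with j^2 \equiv -1 \pmod{p}, a complex residue a + ib is mapped to the pair

z^{+} = (a + jb) \bmod p, \qquad z^{-} = (a - jb) \bmod p,

and the product of two complex numbers maps to two independent channel-wise modular products,

(a + ib)(c + id) \;\longleftrightarrow\; \big( z^{+} w^{+} \bmod p,\; z^{-} w^{-} \bmod p \big),

so each channel is computed without interaction with the other, which is what simplifies complex multiplication and isolates errors between the real and imaginary channels.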
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary: Program title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires a NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney Random number library. Microsoft Foundation Class library. NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel-computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
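For orientation, here is a compact single-threaded sketch of the photon random walk that such programs parallelize (illustrative only; not the Phoogle source, and the roulette constants are arbitrary). Because photons are independent, the outer loop maps naturally onto many parallel threads:

```cpp
#include <cmath>
#include <random>

// Returns the average fraction of photon weight absorbed per launched photon
// for an infinite homogeneous medium with absorption mu_a and scattering mu_s.
double simulate_photons(long n_photons, double mu_a, double mu_s) {
    std::mt19937_64 rng(12345);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double mu_t = mu_a + mu_s;        // total interaction coefficient
    double absorbed = 0.0;
    for (long i = 0; i < n_photons; ++i) {
        double weight = 1.0, path = 0.0;
        while (weight > 0.0) {
            path += -std::log(u(rng)) / mu_t;  // exponentially distributed step
            absorbed += weight * mu_a / mu_t;  // deposit the absorbed fraction
            weight *= mu_s / mu_t;             // remainder continues scattering
            if (weight < 1e-4) {               // Russian roulette termination
                if (u(rng) < 0.1) weight *= 10.0;
                else weight = 0.0;
            }
        }
        (void)path;  // direction/position updates omitted in this sketch
    }
    return absorbed / n_photons;
}
```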
An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization
Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.
2017-04-17
We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.
An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.
We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.
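In generic terms (standard definitions, not copied from the paper), the two quantities such a filter works with, for a problem \min_x f(x) subject to c(x) = 0, are the augmented Lagrangian merit function and the constraint violation:

L_A(x, \lambda; \rho) = f(x) + \lambda^{\top} c(x) + \tfrac{\rho}{2} \lVert c(x) \rVert_2^2, \qquad \theta(x) = \lVert c(x) \rVert,

and a trial step is accepted when it sufficiently reduces either L_A or \theta relative to the pairs already stored in the filter.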
Replication of Space-Shuttle Computers in FPGAs and ASICs
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.
2008-01-01
A document discusses the replication of the functionality of the onboard space-shuttle general-purpose computers (GPCs) in field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The purpose of the replication effort is to enable utilization of proven space-shuttle flight software and software-development facilities to the extent possible during development of software for flight computers for a new generation of launch vehicles derived from the space shuttles. The replication involves specifying the instruction set of the central processing unit and the input/output processor (IOP) of the space-shuttle GPC in a hardware description language (HDL). The HDL is synthesized to form a "core" processor in an FPGA or, less preferably, in an ASIC. The core processor can be used to create a flight-control card to be inserted into a new avionics computer. The IOP of the GPC as implemented in the core processor could be designed to support data-bus protocols other than that of a multiplexer interface adapter (MIA) used in the space shuttle. Hence, a computer containing the core processor could be tailored to communicate via the space-shuttle GPC bus and/or one or more other buses.
DIABCARD a smart card for patients with chronic diseases.
Engelbrecht, R; Hildebrand, C
1997-01-01
Within the European Union-sponsored project DIABCARD, the core of a chip-card-based medical information system for patients with chronic diseases, exemplified on diabetes mellitus, has been developed. The long-term goal of the project is to improve the medical record and the quality of care for patients with chronic diseases. The basic idea is to have a portable electronic medical record on a smart card. This will improve the communication between the different healthcare personnel and between different institutions and, at the same time, promote shared care. The DIABCARD chip-card-based medical information system will offer controlled access to the necessary and up-to-date patient record to everyone involved in the patient's treatment, and it will help reduce the constantly rising healthcare expenditure. The system first was implemented in a small version. The system architecture contains hardware, software, and orgware. It considers especially the memory of the chip card, the processor, the data structure, security functions, the operating system on the chip card, the interface between the chip card and the application, and various application areas. The DIABCARD dataset was defined via an information model, which describes the different communication processes, via acknowledged diabetes datasets and medical scenarios. It includes, among others, emergency data, data for quality assurance, and data for blood glucose self-monitoring. The first prototype has been developed, and a pilot was run for 3 months.
Design of a motion JPEG (M/JPEG) adapter card
NASA Astrophysics Data System (ADS)
Lee, D. H.; Sudharsanan, Subramania I.
1994-05-01
In this paper we describe the design of a high-performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high-quality digital video. The card accepts digital pixels from either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube CL550 JPEG processor that complies with the baseline JPEG standard. We give broad descriptions of the hardware that controls the video interface, the CL550, and the system interface. Some critical design points that enhance the overall performance of M/JPEG systems are pointed out. Control of the adapter card is achieved by interrupt-driven software that runs under DOS. The software performs a variety of tasks that include change of color space (RGB or YUV), change of quantization and Huffman tables, odd and even field control, and some diagnostic operations.
Spectral Structure Of Phase-Induced Intensity Noise In Recirculating Delay Lines
NASA Astrophysics Data System (ADS)
Tur, M.; Moslehi, B.; Bowers, J. E.; Newton, S. A.; Jackson, K. P.; Goodman, J. W.; Cutler, C. C.; Shaw, H. J.
1983-09-01
The dynamic range of fiber optic signal processors driven by relatively incoherent multimode semiconductor lasers is shown to be severely limited by laser phase-induced noise. It is experimentally demonstrated that while the noise power spectrum of differential length fiber filters is approximately flat, processors with recirculating loops exhibit noise with a periodically structured power spectrum with notches at zero frequency as well as at all other multiples of 1/(loop delay). The experimental results are augmented by a theoretical analysis.
MATCHED FILTER COMPUTATION ON FPGA, CELL, AND GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAKER, ZACHARY K.; GOKHALE, MAYA B.; TRIPP, JUSTIN L.
2007-01-08
The matched filter is an important kernel in the processing of hyperspectral data. The filter enables researchers to sift useful data from instruments that span large frequency bands. In this work, they evaluate the performance of a matched filter algorithm implementation on an FPGA-based co-processor (XD1000), the IBM Cell microprocessor, and the NVIDIA GeForce 6900 GTX GPU graphics card. They provide extensive discussion of the challenges and opportunities afforded by each platform. In particular, they explore the problem of partitioning the filter most efficiently between the host CPU and the co-processor. Using their results, they derive several performance metrics that provide the optimal solution for a variety of application situations.
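For orientation, the per-pixel matched-filter score commonly used in hyperspectral detection is the quadratic form score(x) = (s - mu)^T Sigma^-1 (x - mu) / ((s - mu)^T Sigma^-1 (s - mu)), where s is the target signature, mu the background mean, and Sigma the background covariance. The sketch below computes that score for one pixel, assuming the inverse covariance is already available; the band count and numbers are made up, and none of the FPGA/Cell/GPU partitioning discussed in the paper is reproduced.

    // Matched-filter score for one hyperspectral pixel; Sigma^-1 is assumed
    // to be precomputed, and the three-band data are illustrative.
    #include <cstdio>
    #include <vector>

    using Vec = std::vector<double>;
    using Mat = std::vector<Vec>;

    double quad(const Vec& a, const Mat& Sinv, const Vec& b) {
        double s = 0.0;
        for (size_t i = 0; i < a.size(); ++i)
            for (size_t j = 0; j < b.size(); ++j)
                s += a[i] * Sinv[i][j] * b[j];
        return s;
    }

    double matched_filter(const Vec& x, const Vec& s, const Vec& mu, const Mat& Sinv) {
        Vec d(x.size()), t(s.size());
        for (size_t i = 0; i < x.size(); ++i) { d[i] = x[i] - mu[i]; t[i] = s[i] - mu[i]; }
        return quad(t, Sinv, d) / quad(t, Sinv, t);
    }

    int main() {
        Vec mu = {0.2, 0.3, 0.1}, s = {0.8, 0.9, 0.5}, x = {0.5, 0.6, 0.3};
        Mat Sinv = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};   // identity inverse covariance (illustrative)
        std::printf("matched-filter score: %.3f\n", matched_filter(x, s, mu, Sinv));
        return 0;
    }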
Large-N in Volcano Settings: VolcanoSRI
NASA Astrophysics Data System (ADS)
Lees, J. M.; Song, W.; Xing, G.; Vick, S.; Phillips, D.
2014-12-01
We seek a paradigm shift in the approach we take to volcano monitoring, where the compromise from high fidelity to large numbers of sensors is used to increase coverage and resolution. Accessibility, danger, and the risk of equipment loss require that we develop systems that are independent and inexpensive. Furthermore, rather than simply record data on hard disk for later analysis, we desire a system that will work autonomously, capitalizing on wireless technology and in-field network analysis. To this end we are currently producing a low-cost seismic array which will incorporate, at the very basic level, seismological tools for first-cut analysis of a volcano in crisis mode. At the advanced end we expect to perform tomographic inversions in the network in near real time. Geophone (4 Hz) sensors connected to a low-cost recording system will be installed on an active volcano, where triggering, earthquake location, and velocity analysis will take place independent of human interaction. Stations are designed to be inexpensive and possibly disposable. In one of the first implementations the seismic nodes consist of an Arduino Due processor board with an attached Seismic Shield. The Arduino Due processor board contains an Atmel SAM3X8E ARM Cortex-M3 CPU. This 32-bit, 84 MHz processor can filter and perform coarse seismic event detection on a 1600-sample signal in fewer than 200 milliseconds. The Seismic Shield contains a GPS module, a 900 MHz high-power mesh network radio, an SD card, a seismic amplifier, and a 24-bit ADC. External sensors can be attached to either this 24-bit ADC or to the internal multichannel 12-bit ADC contained on the Arduino Due processor board. This allows the node to support attachment of multiple sensors. By utilizing a high-speed 32-bit processor, complex signal processing tasks can be performed simultaneously on multiple sensors. Using a 10 W solar panel, a second system being developed can run autonomously and collect data on 3 channels at 100 Hz for 6 months with the installed 16 GB SD card. Initial designs and test results will be presented and discussed.
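The abstract does not spell out the detection method, so as one plausible first-cut trigger the sketch below runs a classic STA/LTA (short-term average over long-term average) test on a 1600-sample trace; the window lengths, threshold, and synthetic data are assumptions for illustration only.

    // Coarse STA/LTA event detector over a fixed sample window. The use of
    // STA/LTA, the window lengths, and the threshold are assumptions here.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    bool sta_lta_trigger(const std::vector<float>& x, int sta_len = 40, int lta_len = 400,
                         float threshold = 4.0f) {
        if ((int)x.size() < lta_len) return false;
        for (int i = lta_len; i < (int)x.size(); ++i) {
            float sta = 0.0f, lta = 0.0f;
            for (int k = i - sta_len; k < i; ++k) sta += x[k] * x[k];
            for (int k = i - lta_len; k < i; ++k) lta += x[k] * x[k];
            sta /= sta_len; lta /= lta_len;
            if (lta > 0.0f && sta / lta > threshold) return true;   // event candidate
        }
        return false;
    }

    int main() {
        std::vector<float> trace(1600, 0.0f);
        for (int i = 0; i < 1600; ++i) trace[i] = 0.01f * std::sin(0.05f * i);   // background
        for (int i = 900; i < 960; ++i) trace[i] += 0.5f;                        // synthetic "event"
        std::printf("trigger: %s\n", sta_lta_trigger(trace) ? "yes" : "no");
        return 0;
    }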
2011-03-25
number one and Nebulae at number three. Both systems rely on GPU co-processing and use Intel Xeon processors and NVIDIA Tesla C2050 GPUs. In spite of a theoretical peak capability of almost 3 Petaflop/s, Nebulae clocked at 1.271 PFlop/s when running the Linpack benchmark, which puts it
NASA Astrophysics Data System (ADS)
Hill, C.
2008-12-01
Low-cost graphics cards today use many relatively simple compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating-point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more concurrently executing 32-bit floating-point cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus, and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time-dependent shallow-water equations simulation targeting a cluster of 30 computers, each hosting one graphics card. The implementation takes into account the considerations (i), (ii), and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from a higher-level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory. As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful. However, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally we describe preliminary hybrid mixed 32-bit and 64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at a lower performance.
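The kernel-plus-supervisor structure described above can be sketched without any GPU at all. In the sketch below, a 1D linearized shallow-water model on a staggered periodic grid stands in for the 2D CUDA kernels: two plain functions play the role of the numerical kernels and a supervisory loop drives the timestepping; in a GPU version each function would be a CUDA kernel and the u and h arrays would stay resident in graphics memory between steps. Grid size, depth, and time step are illustrative.

    // Kernel/supervisor decomposition illustrated with a 1D linearized
    // shallow-water model; parameters and grid are illustrative.
    #include <cstdio>
    #include <vector>

    const int N = 256;
    const double g = 9.81, H = 100.0, dx = 1000.0, dt = 10.0;

    // "Kernel" 1: update velocity from the height gradient.
    void update_u(std::vector<double>& u, const std::vector<double>& h) {
        for (int i = 0; i < N; ++i)
            u[i] -= g * dt / dx * (h[(i + 1) % N] - h[i]);
    }
    // "Kernel" 2: update height from the velocity divergence.
    void update_h(std::vector<double>& h, const std::vector<double>& u) {
        for (int i = 0; i < N; ++i)
            h[i] -= H * dt / dx * (u[i] - u[(i - 1 + N) % N]);
    }

    int main() {
        std::vector<double> u(N, 0.0), h(N, 0.0);
        h[N / 2] = 1.0;                      // initial bump
        // Supervisory loop: drives the kernels; a GPU version would avoid
        // copying u and h back to host memory between steps.
        for (int step = 0; step < 500; ++step) {
            update_u(u, h);
            update_h(h, u);
        }
        std::printf("h[0]=%.4f  h[N/2]=%.4f\n", h[0], h[N / 2]);
        return 0;
    }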
A UNIX SVR4-OS 9 distributed data acquisition for high energy physics
NASA Astrophysics Data System (ADS)
Drouhin, F.; Schwaller, B.; Fontaine, J. C.; Charles, F.; Pallares, A.; Huss, D.
1998-08-01
The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high speed network. The UNIX system manages a list of OS9 front-end systems with a synchronisation protocol running over a TCP/IP layer.
Ultra-Reliable Digital Avionics (URDA) processor
NASA Astrophysics Data System (ADS)
Branstetter, Reagan; Ruszczyk, William; Miville, Frank
1994-10-01
Texas Instruments Incorporated (TI) developed the URDA processor design under contract with the U.S. Air Force Wright Laboratory and the U.S. Army Night Vision and Electro-Sensors Directorate. TI's approach couples advanced packaging solutions with advanced integrated circuit (IC) technology to provide a high-performance (200 MIPS/800 MFLOPS) modular avionics processor module for a wide range of avionics applications. TI's processor design integrates two Ada-programmable, URDA basic processor modules (BPM's) with a JIAWG-compatible PiBus and TMBus on a single F-22 common integrated processor-compatible form-factor SEM-E avionics card. A separate, high-speed (25-MWord/second 32-bit word) input/output bus is provided for sensor data. Each BPM provides a peak throughput of 100 MIPS scalar concurrent with 400-MFLOPS vector processing in a removable multichip module (MCM) mounted to a liquid-flowthrough (LFT) core and interfacing to a processor interface module printed wiring board (PWB). Commercial RISC technology coupled with TI's advanced bipolar complementary metal oxide semiconductor (BiCMOS) application specific integrated circuit (ASIC) and silicon-on-silicon packaging technologies are used to achieve the high performance in a miniaturized package. A MIPS R4000-family reduced instruction set computer (RISC) processor and a TI 100-MHz BiCMOS vector coprocessor (VCP) ASIC provide, respectively, the 100 MIPS of scalar processor throughput and 400 MFLOPS of vector processing throughput for each BPM. The TI Aladdin ASIC chipset was developed on the TI Aladdin Program under contract with the U.S. Army Communications and Electronics Command and was sponsored by the Advanced Research Projects Agency with technical direction from the U.S. Army Night Vision and Electro-Sensors Directorate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report contains papers on the following topics: NREN Security Issues: Policies and Technologies; Layer Wars: Protect the Internet with Network Layer Security; Electronic Commission Management; Workflow 2000 - Electronic Document Authorization in Practice; Security Issues of a UNIX PEM Implementation; Implementing Privacy Enhanced Mail on VMS; Distributed Public Key Certificate Management; Protecting the Integrity of Privacy-enhanced Electronic Mail; Practical Authorization in Large Heterogeneous Distributed Systems; Security Issues in the Truffles File System; Issues surrounding the use of Cryptographic Algorithms and Smart Card Applications; Smart Card Augmentation of Kerberos; and An Overview of the Advanced Smart Card Access Control System. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.
The role of neuroimaging in the discovery of processing stages. A review.
Mulder, G; Wijers, A A; Lange, J J; Buijink, B M; Mulder, L J; Willemsen, A T; Paans, A M
1995-11-01
In this contribution we show how neuroimaging methods can augment behavioural methods to discover processing stages. Event Related Brain Potentials (ERPs), Brain Electrical Source Analysis (BESA) and regional changes in cerebral blood flow (rCBF) do not necessarily require behavioural responses. With the aid of rCBF we are able to discover several cortical and subcortical brain systems (processors) active in selective attention and memory search tasks. BESA describes cortical activity with high temporal resolution in terms of a limited number of neural generators within these brain systems. The combination of behavioural methods and neuroimaging provides a picture of the functional architecture of the brain. The review is organized around three processors: the Visual, Cognitive and Manual Motor Processors.
2011-12-29
ISS030-E-017789 (29 Dec. 2011) --- Working in chorus with the International Space Station team in Houston's Mission Control Center, this astronaut and his Expedition 30 crewmates on the station install a set of Enhanced Processor and Integrated Communications (EPIC) computer cards in one of seven primary computers onboard. The upgrade will allow more experiments to operate simultaneously, and prepare for the arrival of commercial cargo ships later this year.
Learning to Decode Nonverbal Cues in Cross-Cultural Interactions
2009-06-01
iPhones support Mac OS X v10.4.10 or later operating system, as well as Windows Vista and XP, and iTunes 7.5 or later. Apple has designed the iPhones to be...Processor; 1G RAM, 1G HD, Direct X9/ATI Radeon 9800 card with dedicated memory; Noise-canceling headset w/ microphone. Apple video iPod (can be
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin J.
2013-01-01
The Mobile Thread Task Manager (MTTM) is being applied to parallelizing existing flight software to understand the benefits and to develop new techniques and architectural concepts for adapting software to multicore architectures. It allocates and load-balances tasks for a group of threads that migrate across processors to improve cache performance. In order to balance load across threads, the MTTM augments a basic map-reduce strategy to draw jobs from a global queue. In a multicore processor, memory may be "homed" to the cache of a specific processor and must be accessed from that processor. The MTTM architecture wraps access to data with thread management to move threads to the home processor for that data, so that the computation follows the data in an attempt to avoid L2 cache misses. Cache homing is also handled by a memory manager that translates identifiers to the processor IDs where the data will be homed (according to rules defined by the user). The user can also specify the number of threads and processors separately, which is important for tuning performance for different patterns of computation and memory access. MTTM efficiently processes tasks in parallel on a multiprocessor computer. It also provides an interface to make it easier to adapt existing software to a multiprocessor environment.
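The load-balancing core of this pattern, worker threads drawing jobs from a global queue and combining partial results, can be sketched in a few lines of C++. The sketch below is generic and does not reproduce MTTM's thread migration, cache homing, or memory-manager logic; the job count, thread count, and workload are arbitrary.

    // Worker threads drawing jobs from a global queue (the basic map-reduce
    // load-balancing pattern). Thread migration and cache homing are not shown.
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    int main() {
        std::queue<int> jobs;
        for (int i = 0; i < 100; ++i) jobs.push(i);

        std::mutex m;
        std::vector<long long> partial(4, 0);

        auto worker = [&](int id) {
            for (;;) {
                int job;
                {
                    std::lock_guard<std::mutex> lock(m);
                    if (jobs.empty()) return;
                    job = jobs.front();
                    jobs.pop();
                }
                partial[id] += (long long)job * job;   // the "map" work
            }
        };

        std::vector<std::thread> pool;
        for (int t = 0; t < 4; ++t) pool.emplace_back(worker, t);
        for (auto& th : pool) th.join();

        long long total = 0;                           // the "reduce" step
        for (long long p : partial) total += p;
        std::printf("sum of squares 0..99 = %lld\n", total);
        return 0;
    }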
OpenCL Implementation of NeuroIsing
NASA Astrophysics Data System (ADS)
Zapart, C. A.
Recent advances in graphics card hardware combined with the introduction of the OpenCL standard promise to accelerate numerical simulations across diverse scientific disciplines. One such field benefiting from new hardware/software paradigms is econophysics. The paper describes an OpenCL implementation of a selected econophysics model, NeuroIsing, which has been designed to execute in parallel on a vendor-independent graphics card. Originally introduced in the paper [C. A. Zapart, "Econophysics in Financial Time Series Prediction", PhD thesis, Graduate University for Advanced Studies, Japan (2009)], it was at first implemented on a CELL processor running inside a SONY PS3 games console. The NeuroIsing framework can be applied to predicting and trading foreign exchange as well as stock market index futures.
A Unix SVR-4-OS9 distributed data acquisition for high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drouhin, F.; Schwaller, B.; Fontaine, J.C.
1998-08-01
The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high speed network. The Unix system manages a list of OS9 front-end systems with a synchronization protocol running over a TCP/IP layer.
Light-weight cryptography for resource-constrained environments
NASA Astrophysics Data System (ADS)
Baier, Patrick; Szu, Harold
2006-04-01
We give a survey of "light-weight" encryption algorithms designed to maximise security within tight resource constraints (limited memory, power consumption, processor speed, chip area, etc.). The target applications of such algorithms are RFIDs, smart cards, mobile phones, etc., which may store, process, and transmit sensitive data, but at the same time do not always support conventional strong algorithms. A survey of existing algorithms is given and a new proposal is introduced.
Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1
2016-03-01
Master's thesis. ...an open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double...data rate type three and secure digital card memories, an interface to the main satellite bus, and XILINX's soft error mitigation softcore. The
Visual Media Reasoning - Terrain-based Geolocation
2015-06-01
3.4 Alternative Metric Investigation: This section describes a graphics processor unit (GPU) based implementation in the NVIDIA CUDA programming...utilizing 2 concurrent CPU cores, each controlling a single Nvidia C2075 Tesla Fermi CUDA card. Figure 22 shows a comparison of the CPU and the GPU powered
NASA Technical Reports Server (NTRS)
Bokhari, S. H.; Raza, A. D.
1984-01-01
Three methods of augmenting computer networks by adding at most one link per processor are discussed: (1) A tree of N nodes may be augmented such that the resulting graph has diameter no greater than 4 log2((N+2)/3) - 2. This O(N^3) algorithm can be applied to any spanning tree of a connected graph to reduce the diameter of that graph to O(log N); (2) Given a binary tree T and a chain C of N nodes each, C may be augmented to produce C' so that T is a subgraph of C'. This algorithm is O(N) and may be used to produce augmented chains or rings that have diameter no greater than 2 log2((N+2)/3) and are planar; (3) Any rectangular two-dimensional 4 (8) nearest-neighbor array of size N = 2^k may be augmented so that it can emulate a single-step shuffle-exchange network of size N/2 in 3(t) time steps.
Accelerated Adaptive MGS Phase Retrieval
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang
2011-01-01
The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, by performing the matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
2012-02-17
to be solved. ...data processing rather than data caching and control flow. To make use of this computational power, NVIDIA introduced a general purpose parallel...GPU implementations were run on an Intel Nehalem Xeon E5520 2.26 GHz processor with an NVIDIA Tesla C2070 graphics card for varying numbers of
Configuration control of seven-degree-of-freedom arms
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor); Long, Mark K. (Inventor); Lee, Thomas S. (Inventor)
1992-01-01
A seven degree of freedom robot arm with a six degree of freedom end effector is controlled by a processor employing a 6 by 7 Jacobian matrix for defining location and orientation of the end effector in terms of the rotation angles of the joints, a 1 (or more) by 7 Jacobian matrix for defining 1 (or more) user specified kinematic functions constraining location or movement of selected portions of the arm in terms of the joint angles, the processor combining the two Jacobian matrices to produce an augmented 7 (or more) by 7 Jacobian matrix, the processor effecting control by computing in accordance with forward kinematics from the augmented 7 by 7 Jacobian matrix and from the seven joint angles of the arm a set of seven desired joint angles for transmittal to the joint servo loops of the arm. One of the kinematic functions constrains the orientation of the elbow plane of the arm. Another one of the kinematic functions minimizes a sum of gravitational torques on the joints. Still another kinematic function constrains the location of the arm to perform collision avoidance. Generically, one kinematic function minimizes a sum of selected mechanical parameters of at least some of the joints associated with weighting coefficients which may be changed during arm movement. The mechanical parameters may be velocity errors or gravity torques associated with individual joints.
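The augmented-Jacobian idea in its differential (resolved-rate) form can be sketched as follows, assuming the Eigen linear algebra library: the 6-by-7 end-effector Jacobian and a 1-by-7 constraint Jacobian are stacked into a square 7-by-7 matrix that maps joint rates to task rates, and joint rates follow from a linear solve. The random matrices stand in for real kinematics, the augmented Jacobian is assumed nonsingular, and the patent's forward-kinematics computation of desired joint angles is not reproduced.

    // Augmented-Jacobian rate relation, assuming the Eigen library is available.
    // Random matrices stand in for real kinematics; the 7x7 matrix is assumed
    // nonsingular.
    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        Eigen::MatrixXd Je = Eigen::MatrixXd::Random(6, 7);      // 6x7 end-effector Jacobian (placeholder)
        Eigen::RowVectorXd Jc = Eigen::RowVectorXd::Random(7);   // 1x7 constraint Jacobian (placeholder)

        Eigen::MatrixXd Ja(7, 7);                                // stack into the augmented 7x7 Jacobian
        Ja.topRows(6) = Je;
        Ja.row(6) = Jc;

        Eigen::VectorXd xdot(7);                                 // 6 end-effector rates + 1 constraint rate
        xdot << 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0;

        Eigen::VectorXd qdot = Ja.partialPivLu().solve(xdot);    // joint rates from the linear solve
        std::cout << "joint rates: " << qdot.transpose() << std::endl;
        return 0;
    }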
Configuration control of seven degree of freedom arms
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1995-01-01
A seven-degree-of-freedom robot arm with a six-degree-of-freedom end effector is controlled by a processor employing a 6-by-7 Jacobian matrix for defining location and orientation of the end effector in terms of the rotation angles of the joints, a 1 (or more)-by-7 Jacobian matrix for defining 1 (or more) user-specified kinematic functions constraining location or movement of selected portions of the arm in terms of the joint angles, the processor combining the two Jacobian matrices to produce an augmented 7 (or more)-by-7 Jacobian matrix, the processor effecting control by computing in accordance with forward kinematics from the augmented 7-by-7 Jacobian matrix and from the seven joint angles of the arm a set of seven desired joint angles for transmittal to the joint servo loops of the arm. One of the kinematic functions constrains the orientation of the elbow plane of the arm. Another one of the kinematic functions minimizes a sum of gravitational torques on the joints. Still another one of the kinematic functions constrains the location of the arm to perform collision avoidance. Generically, one of the kinematic functions minimizes a sum of selected mechanical parameters of at least some of the joints associated with weighting coefficients which may be changed during arm movement. The mechanical parameters may be velocity errors or position errors or gravity torques associated with individual joints.
Mobile Situational Awareness Tool: Unattended Ground Sensor-Based Remote Surveillance System
2014-09-01
into prototyped WSNs. In 2012, the Raspberry Pi, an SBC with an ARM processor running GNU/Linux, also designed for students and hobbyists, entered...the market selling for only $25 each [30]. The Raspberry Pi was the size of a credit card, had the ability to connect to a wide variety of...peripherals, to include Wi-Fi adapters and cameras, and had enough processing power to play high-definition video [31]. The Raspberry Pi proved to be
1991-07-31
[Fragment of a VMEbus board vendor listing: DY-4 Systems intelligent SCSI controller (DMV-719), byte-wide memory card (DMV-536), and power supply unit (DMV-870); Force Computers CPU-386 series processor SBC, advanced system control unit (ASCU-1/2), and graphics controller (AGC); vendor record for Janz Computer AG, Im Doerener Feld 3, D-4790 Paderborn, Germany.]
Investigating the Importance of Stereo Displays for Helicopter Landing Simulation
2016-08-11
visualization. The two instances of X Plane® were implemented using two separate PCs, each incorporating Intel i7 processors and Nvidia Quadro K4200... Nvidia GeForce GTX 680 graphics card was used to administer the stereo acuity and fusion range tests. The tests were displayed on an Asus VG278HE 3D...monitor with 1920x1080 pixels that was compatible with Nvidia 3D Vision2 and that used active shutter glasses. At a 1-m viewing distance, the
JSC Wireless Sensor Network Update
NASA Technical Reports Server (NTRS)
Wagner, Robert
2010-01-01
Sensor nodes are composed of three basic components:
- radio module: a COTS radio module implementing a standardized WSN protocol; treated as a WSN modem by the main board
- main board: contains the application processor (TI MSP430 microcontroller), memory, and power supply; responsible for sensor data acquisition, pre-processing, and task scheduling; re-used in every application with a growing library of embedded C code
- sensor card: contains application-specific sensors, data conditioning hardware, and any advanced hardware not built into the main board (DSPs, faster A/D, etc.); requires (re-)development for each application.
Real-time trajectory optimization on parallel processors
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1993-01-01
A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm that is suitable for real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message-passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on 3 example problems: the Goddard problem; the acceleration-limited, planar minimum-time-to-the-origin problem; and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.
A comparison of communication using the Apple iPad and a picture-based system.
Flores, Margaret; Musgrove, Kate; Renner, Scott; Hinton, Vanessa; Strozier, Shaunita; Franklin, Susan; Hill, Doris
2012-06-01
Augmentative and alternative communication (AAC) interventions have been shown to improve both communication and social skills in children and youth with autism spectrum disorders and other developmental disabilities. AAC applications have become available for personal devices such as cell phones, MP3 Players, and personal computer tablets. It is critical that these new forms of AAC are explored and evaluated. The purpose of this study was to investigate the utility of the Apple iPad™ as a communication device by comparing its use to a communication system using picture cards. Five elementary students with autism spectrum disorders and developmental disabilities who used a picture card system participated in the study. The results were mixed; communication behaviors either increased when using the iPad or remained the same as when using picture cards. The implications of these findings are discussed.
NASA Astrophysics Data System (ADS)
Russkova, Tatiana V.
2017-11-01
One tool to improve the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel technology. A new algorithm oriented to parallel execution on a CUDA-enabled NVIDIA graphics processor is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capability are analyzed. It is shown that the changeover of computing from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
DSP Implementation of the Retinex Image Enhancement Algorithm
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2004-01-01
The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations, thus to achieve real-time performance using current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.
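At its core a single-scale Retinex output is the difference between the log of a pixel and the log of its blurred surround. The sketch below applies that operation to a tiny synthetic grayscale image, with a box blur standing in for the Gaussian surround; the image, blur radius, and offsets are illustrative, and the multi-scale, color-constancy, and DSP-specific optimizations discussed above are not reproduced.

    // Single-scale Retinex on a tiny synthetic image:
    // R(x,y) = log(I(x,y)) - log(blur(I)(x,y)), with a box blur standing in
    // for the Gaussian surround. All sizes and values are illustrative.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        const int W = 8, H = 8, R = 2;
        std::vector<double> img(W * H), out(W * H);
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                img[y * W + x] = (x < W / 2) ? 30.0 : 200.0;   // dark/bright halves

        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                double sum = 0.0; int n = 0;
                for (int dy = -R; dy <= R; ++dy)
                    for (int dx = -R; dx <= R; ++dx) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < H && xx >= 0 && xx < W) { sum += img[yy * W + xx]; ++n; }
                    }
                double surround = sum / n;
                out[y * W + x] = std::log(img[y * W + x] + 1.0) - std::log(surround + 1.0);
            }
        }
        // Pixels straddling the dark/bright boundary show the local contrast boost.
        std::printf("Retinex at boundary: out(3,0)=%.3f, out(4,0)=%.3f\n", out[3], out[4]);
        return 0;
    }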
Jastrzembski, Tiffany S.; Charness, Neil
2009-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
Jastrzembski, Tiffany S; Charness, Neil
2007-12-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; M-sub(age) = 20) and older (N = 20; M-sub(age) = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.
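For readers unfamiliar with the modeling style, the sketch below runs a toy keystroke-level calculation in the Card, Moran, and Newell manner for a short phone-dialing task. The operator times are commonly quoted textbook values and the task encoding is invented; the older-adult parameter estimates reported in the work above are not reproduced here.

    // Toy keystroke-level model (KLM) estimate for a short phone-dialing task.
    // Operator times are commonly quoted illustrative values, not the
    // older-adult parameters estimated in the work above.
    #include <cstdio>
    #include <string>

    int main() {
        const double K = 0.28;   // keystroke (average typist), seconds
        const double H = 0.40;   // homing the hand on the keypad
        const double M = 1.35;   // mental preparation

        // Task: home on keypad, think, dial a 10-digit number, think, press send.
        std::string ops = "HM KKKKKKKKKK M K";
        double t = 0.0;
        for (char op : ops) {
            if (op == 'K') t += K;
            else if (op == 'H') t += H;
            else if (op == 'M') t += M;
        }
        std::printf("predicted task time: %.2f s\n", t);
        return 0;
    }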
Quantum Monte Carlo: Faster, More Reliable, And More Accurate
NASA Astrophysics Data System (ADS)
Anderson, Amos Gerald
2010-06-01
The Schrodinger Equation has been available for about 83 years, but today we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we are held back by a lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real-time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we have seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing processor quantity over quality, they have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known use of a graphics card in computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity in mind, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation by manipulating configuration weights, thus facilitating efficient and robust calculations. Our combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards represents a new way of using Quantum Monte Carlo to study arbitrarily sized molecules.
Method and apparatus for electrospark deposition
Bailey, Jeffrey A.; Johnson, Roger N.; Park, Walter R.; Munley, John T.
2004-12-28
A method and apparatus for controlling electrospark deposition (ESD) comprises using electrical variable waveforms from the ESD process as a feedback parameter. The method comprises measuring a plurality of peak amplitudes from a series of electrical energy pulses delivered to an electrode tip. The maximum peak value from among the plurality of peak amplitudes correlates to the contact force between the electrode tip and a workpiece. The method further comprises comparing the maximum peak value to a set point to determine an offset and optimizing the contact force according to the value of the offset. The apparatus comprises an electrode tip connected to an electrical energy wave generator and an electrical signal sensor, which connects to a high-speed data acquisition card. An actuator provides relative motion between the electrode tip and a workpiece by receiving a feedback drive signal from a processor that is operably connected to the actuator and the high-speed data acquisition card.
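The feedback idea in the claims can be sketched as a simple loop body: take the maximum peak amplitude from a burst of pulses, compare it to a set point to obtain an offset, and convert the offset into an actuator correction. The peak values, set point, gain, and sign convention below are all hypothetical.

    // Schematic ESD contact-force feedback step: compare the maximum measured
    // peak amplitude to a set point and derive an actuator correction.
    // Values, gain, and the sign convention are hypothetical.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> peaks = {11.8, 12.4, 13.1, 12.0, 12.7};   // peak amplitudes from a pulse burst
        double max_peak = *std::max_element(peaks.begin(), peaks.end());

        const double set_point = 12.5;           // target peak amplitude
        double offset = max_peak - set_point;    // deviation from the desired contact condition

        const double gain = 0.05;                // proportional gain (hypothetical)
        double actuator_correction = -gain * offset;

        std::printf("max peak %.2f, offset %.2f, actuator correction %.3f\n",
                    max_peak, offset, actuator_correction);
        return 0;
    }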
Design of a "Digital Atlas Vme Electronics" (DAVE) module
NASA Astrophysics Data System (ADS)
Goodrick, M.; Robinson, D.; Shaw, R.; Postranecky, M.; Warren, M.
2012-01-01
ATLAS-SCT has developed a new ATLAS trigger card, "Digital Atlas Vme Electronics" (DAVE). The unit is designed to provide a versatile array of interface and logic resources, including a large FPGA. It interfaces to both VME bus and USB hosts. DAVE aims to provide exact ATLAS CTP (ATLAS Central Trigger Processor) functionality, with random trigger, simple and complex deadtime, ECR (Event Counter Reset), BCR (Bunch Counter Reset), etc. being generated to give exactly the same conditions in standalone running as experienced in combined runs. DAVE provides additional hardware and a large amount of free firmware resource to allow users to add or change functionality. The combination of the large number of individually programmable inputs and outputs in various formats, with very large external RAM and other components all connected to the FPGA, also makes DAVE a powerful and versatile FPGA utility card.
NASA Astrophysics Data System (ADS)
Lightstone, P. C.; Davidson, W. M.
1982-04-01
The military detection assessment laboratory houses an experimental field system which assesses different alarm indicators such as fence disturbance sensors, MILES cables, and microwave Racons. A speech synthesis board was purchased that could be interfaced, by means of a computer, to an alarm logger, making verbal acknowledgement of alarms possible. Different products and different types of voice synthesis were analyzed before a linear predictive coding device produced by Telesensory Speech Systems of Palo Alto, California was chosen. This device is called the Speech 1000 Board and has a dedicated 8085 processor. A multiplexer card was designed and the Sp 1000 was interfaced through the card to a TMS 990/100M Texas Instruments microcomputer. It was also necessary to design the software with the capability of recognizing and flagging an alarm on any 1 of 32 possible lines. The experimental field system was then packaged with a DC power supply, LED indicators, speakers, and switches, and deployed in the field, performing reliably.
Architecture of security management unit for safe hosting of multiple agents
NASA Astrophysics Data System (ADS)
Gilmont, Tanguy; Legat, Jean-Didier; Quisquater, Jean-Jacques
1999-04-01
In such growing areas as remote applications in large public networks, electronic commerce, digital signature, intellectual property and copyright protection, and even operating system extensibility, the hardware security level offered by existing processors is insufficient. They lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices, such as smart cards, are exceptions, but they have neither enough processing power nor enough memory to stand up to such applications. This paper proposes an architecture for a secure processor in which the classical memory management unit is extended into a new security management unit. It allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor privilege scheme is replaced by a privilege inheritance mechanism that is better suited to operating system extensibility. The result is a secure processor that has hardware support for extensible multitask operating systems and can be used for both general applications and critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance, and do not require it to be modified.
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance as well as increasing multi-core processor and cluster-based parallelism demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios, and scalability results.
Using state-issued identification cards for obesity tracking.
Morris, Daniel S; Schubert, Stacey S; Ngo, Duyen L; Rubado, Dan J; Main, Eric; Douglas, Jae P
2015-01-01
Obesity prevention has emerged as one of public health's top priorities. Public health agencies need reliable data on population health status to guide prevention efforts. Existing survey data sources provide county-level estimates; obtaining sub-county estimates from survey data can be prohibitively expensive. State-issued identification cards are an alternate data source for community-level obesity estimates. We computed body mass index for 3.2 million adult Oregonians who were issued a driver license or identification card between 2003 and 2010. Statewide estimates of obesity prevalence and average body mass index were compared to the Oregon Behavioral Risk Factor Surveillance System (BRFSS). After geocoding addresses we calculated average adult body mass index for every census tract and block group in the state. Sub-county estimates reveal striking patterns in the population's weight status. Annual obesity prevalence estimates from identification cards averaged 18% lower than the BRFSS for men and 31% lower for women. Body mass index estimates averaged 2% lower than the BRFSS for men and 5% lower for women. Identification card records are a promising data source to augment tracking of obesity. People do tend to misrepresent their weight, but the consistent bias does not obscure patterns and trends. Large numbers of records allow for stable estimates for small geographic areas. Copyright © 2014 Asian Oceanian Association for the Study of Obesity. All rights reserved.
Pi-Sat: A Low Cost Small Satellite and Distributed Spacecraft Mission System Test Platform
NASA Technical Reports Server (NTRS)
Cudmore, Alan
2015-01-01
Current technology and budget trends indicate a shift in satellite architectures from large, expensive single-satellite missions to small, low-cost distributed spacecraft missions. At the center of this shift is the SmallSat/Cubesat architecture. The primary goal of the Pi-Sat project is to create a low-cost and easy-to-use Distributed Spacecraft Mission (DSM) test bed to facilitate the research and development of next-generation DSM technologies and concepts. This test bed also serves as a realistic software development platform for SmallSat and Cubesat architectures. The Pi-Sat is based on the popular $35 Raspberry Pi single-board computer featuring a 700 MHz ARM processor, 512 MB of RAM, a flash memory card, and a wealth of I/O options. The Raspberry Pi runs the Linux operating system and can easily run Code 582's Core Flight System flight software architecture. The low cost and high availability of the Raspberry Pi make it an ideal platform for Distributed Spacecraft Mission and Cubesat software development. The Pi-Sat models currently include a Pi-Sat 1U Cube, a Pi-Sat Wireless Node, and a Pi-Sat Cubesat processor card. The Pi-Sat project takes advantage of many popular trends in the Maker community, including low-cost electronics, 3D printing, and rapid prototyping, in order to provide a realistic platform for flight software testing, training, and technology development. The Pi-Sat has also provided fantastic hands-on training opportunities for NASA summer interns and Pathways students.
Scalable NIC-based reduction on large-scale clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.; Fernández, J. C.; Petrini, F.
2003-01-01
Many parallel algorithms require efficient support for reduction collectives. Over the years, researchers have developed optimal reduction algorithms by taking into account system size, data size, and the complexity of reduction operations. However, all of these algorithms have assumed that the reduction processing takes place on the host CPU. Modern Network Interface Cards (NICs) sport programmable processors with substantial memory and thus introduce a fresh variable into the equation. This raises the following interesting challenge: can we take advantage of modern NICs to implement reduction operations? In this paper, we take on this challenge in the context of large-scale clusters. Through experiments on the 960-node, 1920-processor ASCI Linux Cluster (ALC) located at the Lawrence Livermore National Laboratory, we show that NIC-based reductions indeed perform with reduced latency and improved consistency over host-based algorithms for the common case, and that these benefits scale as the system grows. In the largest configuration tested (1812 processors), our NIC-based algorithm can sum a single-element vector in 73 microseconds with 32-bit integers and in 118 microseconds with 64-bit floating-point numbers. These results represent an improvement, respectively, of 121% and 39% with respect to the production-level MPI library.
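For context, the host-based baseline that a NIC-based reduction offloads is an ordinary MPI collective. The sketch below performs a minimal sum reduction of one integer per process with MPI_Reduce; it illustrates the semantics only and says nothing about the NIC-resident algorithm in the paper.

    // Minimal host-based sum reduction with MPI, the kind of collective the
    // NIC-based approach offloads to the network interface.
    // Compile with an MPI wrapper (e.g., mpicxx) and launch with mpirun.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int local = rank + 1;     // each process contributes one element
        int global = 0;
        MPI_Reduce(&local, &global, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("sum over %d processes = %d\n", size, global);
        MPI_Finalize();
        return 0;
    }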
Technology transfer of military space microprocessor developments
NASA Astrophysics Data System (ADS)
Gorden, C.; King, D.; Byington, L.; Lanza, D.
1999-01-01
Over the past 13 years the Air Force Research Laboratory (AFRL) has led the development of microprocessors and computers for USAF space and strategic missile applications. As a result of these Air Force development programs, advanced computer technology is available for use by civil and commercial space customers as well. The Generic VHSIC Spaceborne Computer (GVSC) program began in 1985 at AFRL to fulfill a deficiency in the availability of space-qualified data and control processors. GVSC developed a radiation hardened multi-chip version of the 16-bit, Mil-Std 1750A microprocessor. The follow-on to GVSC, the Advanced Spaceborne Computer Module (ASCM) program, was initiated by AFRL to establish two industrial sources for complete, radiation-hardened 16-bit and 32-bit computers and microelectronic components. Development of the Control Processor Module (CPM), the first of two ASCM contract phases, concluded in 1994 with the availability of two sources for space-qualified, 16-bit Mil-Std-1750A computers, cards, multi-chip modules, and integrated circuits. The second phase of the program, the Advanced Technology Insertion Module (ATIM), was completed in December 1997. ATIM developed two single board computers based on 32-bit reduced instruction set computer (RISC) processors. GVSC, CPM, and ATIM technologies are flying or baselined into the majority of today's DoD, NASA, and commercial satellite systems.
QuickProbs—A Fast Multiple Sequence Alignment Algorithm Designed for Graphics Processors
Gudyś, Adam; Deorowicz, Sebastian
2014-01-01
Multiple sequence alignment is a crucial task in a number of biological analyses like secondary structure prediction, domain searching, phylogeny, etc. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In the paper we present QuickProbs, a variant of MSAProbs customised for graphics processors. We selected the two most time-consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrices calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on a quad-core PC equipped with a high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than the original CPU-parallel MSAProbs. Additional tests performed on several protein families from the Pfam database give an overall speed-up of 6.7. Compared to other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at similar speed. Additionally we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors. PMID:24586435
NASA Astrophysics Data System (ADS)
Badoni, D.; Bizzarri, M.; Bonaiuto, V.; Checcucci, B.; De Simone, N.; Federici, L.; Fucci, A.; Paoluzzi, G.; Papi, A.; Piccini, M.; Salamon, A.; Salina, G.; Santovetti, E.; Sargeni, F.; Venditti, S.
2014-01-01
The goal of the NA62 experiment at the CERN SPS is the measurement of the branching ratio of the very rare kaon decay K+ → π+ νν̄ with a 10% accuracy by collecting 100 events in two years of data taking. An efficient photon veto system is needed to reject the K+ → π+ π0 background, and a liquid krypton electromagnetic calorimeter will be used for this purpose in the 1-10 mrad angular region. The L0 trigger system for the calorimeter consists of a peak reconstruction algorithm implemented on FPGA using a mixed parallel architecture based on soft-core Altera NIOS II embedded processors together with custom VHDL modules. This solution allows an efficient and flexible reconstruction of the energy-deposition peak. In total, the system will be composed of 36 TEL62 boards, 108 mezzanine cards, and 215 high-performance FPGAs. We describe the design, current status, and the results of the first performance tests.
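The essence of an energy-deposition peak search can be sketched as scanning sampled data for local maxima above a threshold, as below; the samples and threshold are invented, and the actual NIOS II/VHDL trigger logic is far more involved.

    // Toy energy-deposition peak search: report local maxima above a threshold.
    // Samples and threshold are hypothetical.
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> samples = {3, 4, 6, 20, 45, 38, 12, 5, 4, 9, 30, 27, 8, 3};
        const int threshold = 15;

        for (size_t i = 1; i + 1 < samples.size(); ++i) {
            if (samples[i] > threshold &&
                samples[i] >= samples[i - 1] && samples[i] > samples[i + 1]) {
                std::printf("peak at sample %zu, amplitude %d\n", i, samples[i]);
            }
        }
        return 0;
    }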
NASA Astrophysics Data System (ADS)
Laracuente, Nicholas; Grossman, Carl
2013-03-01
We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphical processor units (GPUs). Recent developments in hardware and software have allowed for general-purpose computing with inexpensive GPU hardware. These devices are better suited for emulating hardware autocorrelators than traditional CPU-based software applications because they emphasize parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core PCI graphics card in a computer with a 3.2 GHz Intel i5 CPU running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware correlator measurements on the same signals. Supported by HHMI and Swarthmore College
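The quantity being computed is the normalized intensity autocorrelation g2(tau) = <n(t) n(t+tau)> / <n>^2 over binned photon counts. The CPU sketch below evaluates it at a few linear lags on a synthetic Poisson count trace; the multi-tau binning, GPU memory mapping, and real-time streaming of the software above are not reproduced.

    // Normalized intensity autocorrelation over binned photon counts at
    // linear lags, on a synthetic Poisson trace (illustrative only).
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(7);
        std::poisson_distribution<int> poisson(4.0);
        std::vector<int> counts(100000);
        for (int& c : counts) c = poisson(rng);

        double mean = 0.0;
        for (int c : counts) mean += c;
        mean /= counts.size();

        const int max_lag = 8;
        for (int lag = 1; lag <= max_lag; ++lag) {
            double corr = 0.0;
            size_t n = counts.size() - lag;
            for (size_t t = 0; t < n; ++t) corr += (double)counts[t] * counts[t + lag];
            corr /= n;
            std::printf("g2(%d) = %.4f\n", lag, corr / (mean * mean));   // ~1 for uncorrelated counts
        }
        return 0;
    }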
NASA Technical Reports Server (NTRS)
Samba, A. S.
1985-01-01
The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
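For reference, the scalar point cyclic reduction that VCR adapts for vector hardware can be sketched as follows for a tridiagonal system of size n = 2^k - 1: repeatedly eliminate every other unknown, solve the single remaining equation, and back-substitute level by level. This is a plain serial C++ sketch, not the VPS 32 code, and it assumes the first sub-diagonal and last super-diagonal entries are zero.

    // Point cyclic reduction for a tridiagonal system
    // a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i], with n = 2^k - 1 and
    // a[0] = c[n-1] = 0. Serial sketch only.
    #include <cstdio>
    #include <vector>

    std::vector<double> cyclic_reduction(std::vector<double> a, std::vector<double> b,
                                         std::vector<double> c, std::vector<double> d) {
        const int n = (int)b.size();
        // Forward reduction: at each level, fold the neighbours of every
        // second remaining equation into it.
        for (int h = 1; h < n; h *= 2) {
            for (int i = 2 * h - 1; i < n; i += 2 * h) {
                double alpha = (i - h >= 0) ? -a[i] / b[i - h] : 0.0;
                double gamma = (i + h < n) ? -c[i] / b[i + h] : 0.0;
                double na = (i - h >= 0) ? alpha * a[i - h] : 0.0;
                double nc = (i + h < n) ? gamma * c[i + h] : 0.0;
                b[i] += (i - h >= 0 ? alpha * c[i - h] : 0.0) + (i + h < n ? gamma * a[i + h] : 0.0);
                d[i] += (i - h >= 0 ? alpha * d[i - h] : 0.0) + (i + h < n ? gamma * d[i + h] : 0.0);
                a[i] = na;
                c[i] = nc;
            }
        }
        // Back substitution from the coarsest level down.
        std::vector<double> x(n, 0.0);
        int hmax = 1;
        while (2 * hmax < n + 1) hmax *= 2;   // hmax = (n+1)/2
        for (int h = hmax; h >= 1; h /= 2) {
            for (int i = h - 1; i < n; i += 2 * h) {
                double xm = (i - h >= 0) ? x[i - h] : 0.0;
                double xp = (i + h < n) ? x[i + h] : 0.0;
                x[i] = (d[i] - a[i] * xm - c[i] * xp) / b[i];
            }
        }
        return x;
    }

    int main() {
        const int n = 7;                                  // 2^3 - 1
        std::vector<double> a(n, -1.0), b(n, 2.0), c(n, -1.0), d(n, 1.0);
        a[0] = 0.0; c[n - 1] = 0.0;                       // no out-of-band entries
        std::vector<double> x = cyclic_reduction(a, b, c, d);
        for (int i = 0; i < n; ++i) std::printf("x[%d] = %.3f\n", i, x[i]);
        return 0;
    }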
Optical Verification Laboratory Demonstration System for High Security Identification Cards
NASA Technical Reports Server (NTRS)
Javidi, Bahram
1997-01-01
Document fraud, including unauthorized duplication of identification cards and credit cards, is a serious problem facing the government, banks, businesses, and consumers. In addition, counterfeit products such as computer chips and compact discs are arriving on our shores in great numbers. With the rapid advances in computers, CCD technology, image processing hardware and software, printers, scanners, and copiers, it is becoming increasingly easy to reproduce pictures, logos, symbols, paper currency, or patterns. These problems have stimulated an interest in research, development and publications in security technology. Some ID cards, credit cards and passports currently use holograms as a security measure to thwart copying. The holograms are inspected by the human eye. In theory, the hologram cannot be reproduced by an unauthorized person using commercially available optical components; in practice, however, technology has advanced to the point where the holographic image can be acquired from a credit card (photographed or captured by a CCD camera) and a new hologram synthesized using commercially available optical components or hologram-producing equipment. Therefore, a pattern that can be read by a conventional light source and a CCD camera can be reproduced. An optical security and anti-copying device that provides significant security improvements over existing security technology was demonstrated. The system can be applied for security verification of credit cards, passports, and other IDs so that they cannot easily be reproduced. We have used a new scheme of complex phase/amplitude patterns that cannot be seen and cannot be copied by an intensity-sensitive detector such as a CCD camera. A random phase mask is bonded to a primary identification pattern, which could also be phase encoded. The pattern could be a fingerprint, a picture of a face, or a signature. The proposed optical processing device is designed to identify both the random phase mask and the primary pattern [1-3]. We have demonstrated experimentally an optical processor for security verification of objects, products, and persons. This demonstration is very important to encourage industries to consider the proposed system for research and development.
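A purely digital stand-in for this verification idea can be sketched as follows (the actual system performs the correlation optically): the card is modelled as a complex field, and an intensity-only copy loses the random phase and with it the correlation peak. The array sizes, threshold and variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def verify(card_field, reference_field, threshold=0.8):
    # Digital stand-in for the optical correlator: a genuine card produces a
    # sharp, normalized correlation peak; an intensity-only copy loses the
    # random phase and the peak collapses.  The threshold is illustrative.
    corr = np.fft.ifft2(np.fft.fft2(card_field) *
                        np.conj(np.fft.fft2(reference_field)))
    peak = np.abs(corr).max() / np.sqrt(
        (np.abs(card_field) ** 2).sum() * (np.abs(reference_field) ** 2).sum())
    return peak, peak >= threshold

primary = rng.random((64, 64))                # e.g. a fingerprint image
mask = rng.uniform(0, 2 * np.pi, (64, 64))    # random phase mask bonded to it
genuine = primary * np.exp(1j * mask)         # complex transmittance of the card
copied = np.abs(genuine)                      # CCD copy keeps intensity only

print(verify(genuine, genuine))               # high peak -> accepted
print(verify(copied, genuine))                # phase lost -> rejected
```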
Readout and trigger for the AFP detector at ATLAS experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kocian, M.
AFP, the ATLAS Forward Proton detector, consists of silicon detectors at 205 m and 217 m on each side of ATLAS. In 2016 two detectors on one side were installed. The FEI4 chips are read out at 160 Mbps over optical fibers. The DAQ system uses an FPGA board with an Artix chip and a mezzanine card with an RCE data processing module based on a Zynq chip with an ARM processor running ArchLinux. In this paper we give an overview of the AFP detector together with the commissioning steps taken to integrate it with the ATLAS TDAQ. Furthermore, first performance results are presented.
Readout and trigger for the AFP detector at ATLAS experiment
Kocian, M.
2017-01-25
AFP, the ATLAS Forward Proton detector, consists of silicon detectors at 205 m and 217 m on each side of ATLAS. In 2016 two detectors on one side were installed. The FEI4 chips are read out at 160 Mbps over optical fibers. The DAQ system uses an FPGA board with an Artix chip and a mezzanine card with an RCE data processing module based on a Zynq chip with an ARM processor running ArchLinux. In this paper we give an overview of the AFP detector together with the commissioning steps taken to integrate it with the ATLAS TDAQ. Furthermore, first performance results are presented.
Cranial implant design using augmented reality immersive system.
Ai, Zhuming; Evenhouse, Ray; Leigh, Jason; Charbel, Fady; Rasmussen, Mary
2007-01-01
Software tools that utilize haptics for sculpting precise fitting cranial implants are utilized in an augmented reality immersive system to create a virtual working environment for the modelers. The virtual environment is designed to mimic the traditional working environment as closely as possible, providing more functionality for the users. The implant design process uses patient CT data of a defective area. This volumetric data is displayed in an implant modeling tele-immersive augmented reality system where the modeler can build a patient specific implant that precisely fits the defect. To mimic the traditional sculpting workspace, the implant modeling augmented reality system includes stereo vision, viewer centered perspective, sense of touch, and collaboration. To achieve optimized performance, this system includes a dual-processor PC, fast volume rendering with three-dimensional texture mapping, the fast haptic rendering algorithm, and a multi-threading architecture. The system replaces the expensive and time consuming traditional sculpting steps such as physical sculpting, mold making, and defect stereolithography. This augmented reality system is part of a comprehensive tele-immersive system that includes a conference-room-sized system for tele-immersive small group consultation and an inexpensive, easily deployable networked desktop virtual reality system for surgical consultation, evaluation and collaboration. This system has been used to design patient-specific cranial implants with precise fit.
Performance Metrics for Monitoring Parallel Program Executions
NASA Technical Reports Server (NTRS)
Sarukkai, Sekkar R.; Gotwais, Jacob K.; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Existing tools for debugging the performance of parallel programs provide either graphical representations of program execution or profiles of program executions. However, for performance debugging tools to be useful, such information has to be augmented with information that highlights the cause of poor program performance. Identifying the cause of poor performance requires not only determining the significance of various performance problems for the execution time of the program, but also considering the effect of interprocessor communication on individual source-level data structures. In this paper, we present a suite of normalized indices which provide a convenient mechanism for focusing on a region of code with poor performance and which highlight the cause of the problem in terms of processors, procedures and data-structure interactions. All the indices are generated from trace files augmented with data-structure information. Further, we show with the help of examples from the NAS benchmark suite that the indices help in detecting potential causes of poor performance, based on augmented execution traces obtained by monitoring the program.
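A minimal sketch of how such normalized indices might be rolled up from augmented traces is given below; the event fields and the single communication-based index are assumptions for illustration and do not reproduce the paper's full suite of indices.

```python
from collections import defaultdict

def normalized_indices(trace_events):
    # Roll augmented trace events up into normalized indices so the largest
    # value points at the procedure / data-structure pair responsible for the
    # most lost time.  Event fields here are assumed for illustration.
    total = sum(e["exec_time"] + e["comm_time"] for e in trace_events) or 1.0
    index = defaultdict(float)
    for e in trace_events:
        key = (e["procedure"], e["data_structure"])
        index[key] += e["comm_time"] / total   # fraction of run time spent
                                               # communicating this structure
    return dict(sorted(index.items(), key=lambda kv: -kv[1]))
```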
NASA Astrophysics Data System (ADS)
Bradu, Adrian; Kapinchev, Konstantin; Barnes, Fred; Garway-Heath, David F.; Rajendram, Ranjan; Keane, Pearce; Podoleanu, Adrian G.
2015-03-01
Recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed Master Slave OCT (MS-OCT), specialized for delivering en-face images. This method uses principles of spectral domain interferometry in two stages. MS-OCT operates like a time domain OCT, selecting signals from a chosen depth only while scanning the laser beam across the eye. Time domain OCT allows real-time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. This tremendous advantage, the parallel provision of data from numerous depths, cannot be fully exploited with commodity multi-core processors alone. We compare here the major improvement in processing and display brought about by using graphics cards. We demonstrate images obtained with a swept source at 100 kHz (which determines an acquisition time for a frame of 200×200 pixels of Ta = 1.6 s). By the end of the acquired frame being scanned, using our computing capacity, 4 simultaneous en-face images could be created in T = 0.8 s. We demonstrate that by using graphics cards, 32 en-face images can be displayed in Td = 0.3 s. Other faster swept source engines can be used with no difference in terms of Td. With 32 images (or more), volumes can be created for 3D display using en-face images, as opposed to the current technology where volumes are created using cross-sectional OCT images.
Political Kidnappings in Turkey, 1971-1972
1977-07-01
diet, one TPLA member replied, "No, your minds must be clear in case you have to make an important decision." The hostages never learned what that...and inadequate exercise and diet augmented the intense anxiety and discomfort they suffered. As has happened in other political kidnappings, the...card from one of the airmen. Meanwhile, after his arrest, Ertekin had revealed the names of the group members, and the GOT learned that four of the
Efficient Implementation of MrBayes on Multi-GPU
Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang
2013-01-01
MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)3), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)3 Bayesian algorithm and its improved and parallel versions are now not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)3 (aMCMCMC), of MrBayes (MC)3 on the Compute Unified Device Architecture. By dynamically adjusting the task granularity to adapt to input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new “node-by-node” task scheduling strategy is developed to improve concurrency, and several optimizing methods are used to reduce extra overhead. Experimental results show that a(MC)3 achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)3 is dramatically faster than all the previous (MC)3 algorithms and scales well to large GPU clusters. PMID:23493260
Efficient implementation of MrBayes on multi-GPU.
Bao, Jie; Xia, Hongju; Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang
2013-06-01
MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)3), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)3 Bayesian algorithm and its improved and parallel versions are now not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)3 (aMCMCMC), of MrBayes (MC)3 on the Compute Unified Device Architecture. By dynamically adjusting the task granularity to adapt to input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new "node-by-node" task scheduling strategy is developed to improve concurrency, and several optimizing methods are used to reduce extra overhead. Experimental results show that a(MC)3 achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)3 is dramatically faster than all the previous (MC)3 algorithms and scales well to large GPU clusters.
Event-Driven Random-Access-Windowing CCD Imaging System
NASA Technical Reports Server (NTRS)
Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William
2004-01-01
A charge-coupled-device (CCD) based high-speed imaging system, called a realtime, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor-transistor logic (TTL) level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
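The precompute-once, remap-per-frame structure that makes texture-mapping hardware attractive can be sketched in a few lines of Python; the radial distortion model, coefficients and nearest-neighbour lookup below are simplifications of the paper's polar mesh-based correction, not its implementation.

```python
import numpy as np

def build_undistort_map(h, w, k1, k2, cx, cy, fx, fy):
    # Precompute, once, the source coordinate of every output pixel under a
    # simple radial model x_d = x * (1 + k1*r^2 + k2*r^4); per-frame work is
    # then a pure lookup, which is what the texture-mapping hardware performs.
    ys, xs = np.mgrid[0:h, 0:w]
    xn = (xs - cx) / fx
    yn = (ys - cy) / fy
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    map_x = np.clip(xn * scale * fx + cx, 0, w - 1)
    map_y = np.clip(yn * scale * fy + cy, 0, h - 1)
    return map_y, map_x

def undistort(frame, map_y, map_x):
    # Nearest-neighbour remap; real systems interpolate (e.g. bilinearly).
    return frame[np.round(map_y).astype(int), np.round(map_x).astype(int)]
```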
[Development of a video image system for wireless capsule endoscopes based on DSP].
Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua
2008-02-01
A video image recorder to record video pictures from wireless capsule endoscopes was designed. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed first-in first-out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation speed of the DSP and to decrease the size of the executable code. At the same time, proper addresses are assigned to the different memories, which have different speeds, and the memory structure is also optimized. In addition, this system uses plenty of Extended Direct Memory Access (EDMA) to transport and process image data, which results in stable and high performance.
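The core of the JPEG path on the DSP is an 8×8 DCT followed by coefficient quantization; the NumPy sketch below shows that step with a straightforward matrix DCT. The firmware's fast butterfly DCT and fast quantization are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix for an n x n block transform.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0, :] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def compress_block(block, qtable):
    # Level-shift, 2-D DCT, then quantization: this step provides almost all
    # of the size reduction (and all of the information loss) in JPEG coding.
    # qtable is any 8x8 quantization table, e.g. the JPEG luminance table.
    D = dct_matrix(block.shape[0])
    coeffs = D @ (block.astype(float) - 128.0) @ D.T
    return np.round(coeffs / qtable).astype(int)
```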
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
Software Graphics Processing Unit (sGPU) for Deep Space Applications
NASA Technical Reports Server (NTRS)
McCabe, Mary; Salazar, George; Steele, Glen
2015-01-01
A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.
A computational system for lattice QCD with overlap Dirac quarks
NASA Astrophysics Data System (ADS)
Chiu, Ting-Wai; Hsieh, Tung-Han; Huang, Chao-Hsi; Huang, Tsung-Ren
2003-05-01
We outline the essential features of a Linux PC cluster which is now being developed at National Taiwan University, and discuss how to optimize its hardware and software for lattice QCD with overlap Dirac quarks. At present, the cluster consists of 30 nodes, with each node consisting of one Pentium 4 processor (1.6/2.0 GHz), one Gbyte of PC800 RDRAM, one 40/80 Gbyte hard disk, and a network card. The speed of this system is estimated to be 30 Gflops, and its price/performance ratio is better than $1.0/Mflops for 64-bit (double precision) computations in quenched lattice QCD with overlap Dirac quarks.
CSP: A Multifaceted Hybrid Architecture for Space Computing
NASA Technical Reports Server (NTRS)
Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron
2014-01-01
Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.
On-board attitude determination for the Explorer Platform satellite
NASA Technical Reports Server (NTRS)
Jayaraman, C.; Class, B.
1992-01-01
This paper describes the attitude determination algorithm for the Explorer Platform satellite. The algorithm, which is baselined on the Landsat code, is a six-element linear quadratic state estimation processor, in the form of a Kalman filter augmented by an adaptive filter process. Improvements to the original Landsat algorithm were required to meet mission pointing requirements. These consisted of a more efficient sensor processing algorithm and the addition of an adaptive filter which acts as a check on the Kalman filter during satellite slew maneuvers. A 1750A processor will be flown on board the satellite for the first time as a coprocessor (COP) in addition to the NASA Standard Spacecraft Computer. The attitude determination algorithm, which will be resident in the COP's memory, will make full use of its improved processing capabilities to meet mission requirements. Additional benefits were gained by writing the attitude determination code in Ada.
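For orientation, one generic linear Kalman predict/update cycle, the core of such a six-element estimator, is sketched below; the matrices are placeholders supplied by the mission model and the adaptive-filter check used during slews is omitted, so this is not the flight code.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    # One predict/update cycle of a linear Kalman filter.
    # F: state transition, Q: process noise, H: measurement model,
    # R: measurement noise, z: sensor measurement.
    x = F @ x                              # predict state
    P = F @ P @ F.T + Q                    # predict covariance
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # update with measurement residual
    P = (np.eye(P.shape[0]) - K @ H) @ P   # update covariance
    return x, P
```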
The TOPSAR interferometric radar topographic mapping instrument
NASA Technical Reports Server (NTRS)
Zebker, Howard A.; Madsen, Soren N.; Martin, Jan; Alberti, Giovanni; Vetrella, Sergio; Cucci, Alessandro
1992-01-01
The NASA DC-8 AIRSAR instrument was augmented with a pair of C-band antennas displaced across track to form an interferometer sensitive to topographic variations of the Earth's surface. The antennas were developed by the Italian consortium Co.Ri.S.T.A., under contract to the Italian Space Agency (ASI), while the AIRSAR instrument and modifications to it supporting TOPSAR were sponsored by NASA. A new data processor was developed at JPL for producing the topographic maps, and a second processor was developed at Co.Ri.S.T.A. All the results presented below were processed at JPL. During the 1991 DC-8 flight campaign, data were acquired over several sites in the United States and Europe, and topographic maps were produced from several of these flight lines. Analysis of the results indicate that statistical errors are in the 2-3 m range for flat terrain and in the 4-5 m range for mountainous areas.
Parallelization of the FLAPW method
NASA Astrophysics Data System (ADS)
Canning, A.; Mannstadt, W.; Freeman, A. J.
2000-08-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
A fast CT reconstruction scheme for a general multi-core PC.
Zeng, Kai; Bai, Erwei; Wang, Ge
2007-01-01
Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.
A Fast CT Reconstruction Scheme for a General Multi-Core PC
Zeng, Kai; Bai, Erwei; Wang, Ge
2007-01-01
Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors. PMID:18256731
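A much-simplified multithreaded filtered backprojection in the spirit of the scheme above is sketched below in Python; the geometric-symmetry, SIMD and compiler optimizations from the paper are omitted, and the parallel-beam geometry and array shapes are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fbp_reconstruct(sinogram, angles, n_workers=4):
    # Parallel-beam filtered backprojection: ramp-filter every view in the
    # Fourier domain, then backproject groups of views on separate threads.
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    half = n_det // 2
    xs, ys = np.meshgrid(np.arange(n_det) - half, np.arange(n_det) - half)

    def backproject(view_indices):
        img = np.zeros((n_det, n_det))
        for i in view_indices:
            t = xs * np.cos(angles[i]) + ys * np.sin(angles[i]) + half
            idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
            img += filtered[i][idx]        # nearest-neighbour interpolation
        return img

    chunks = np.array_split(np.arange(n_ang), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partial_images = list(pool.map(backproject, chunks))
    return sum(partial_images) * np.pi / n_ang
```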
General purpose PDP-11 interface processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purtilo, J.
1982-10-01
The Z80 interface card, which for simplicity we refer to as the Zcard, is a general purpose microprocessor card which is directly interfaced to the Whaley multiplexor, itself implemented on PDP-11 series computers at the University of Illinois. The Zcard provides two serial ports at EIA interface levels and contains up to 16K of local memory. The principal function of the Zcard is to serve as an interface in which the user might alter protocols, change interface characteristics (e.g. baud rate, hardware echo, or flow control), or switch between synchronous and asynchronous modes, all under software control. In addition, the Zcard provides extra lines for control of external devices (e.g. a modem) and input of status data. In the simplest use, the Zcard runs a program, downloaded from the PDP-11, which emulates the EIAFB card (previously developed for the Whaley Multiplexor) for serial communication with terminals. Using no flow control, Zcard performance seems superior to that of the EIAFB card in that no characters are lost under normal use (due to the much larger buffer which the Zcard can provide). The Zcard can further distinguish itself from the EIAFB card in that flow control can be enabled by software. A more sophisticated application of the Zcard might include linking several Zcards (using a synchronous option and both of the provided ports) into a network. In this capacity, a ring topology suggests itself as the most immediate choice, although many alternate connection schemes can be envisioned. This report is intended to summarize the project as well as provide an outline for those intending to modify or program the Zcard.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeung, Yu-Hong; Pothen, Alex; Halappanavar, Mahantesh
We present an augmented matrix approach to update the solution to a linear system of equations when the coefficient matrix is modified by a few elements within a principal submatrix. This problem arises in the dynamic security analysis of a power grid, where operators need to perform N-x contingency analysis, i.e., determine the state of the system when up to x links from N fail. Our algorithms augment the coefficient matrix to account for the changes in it, and then compute the solution to the augmented system without refactoring the modified matrix. We provide two algorithms, a direct method and a hybrid direct-iterative method, for solving the augmented system. We also exploit the sparsity of the matrices and vectors to accelerate the overall computation. Our algorithms are compared on three power grids with PARDISO, a parallel direct solver, and CHOLMOD, a direct solver with the ability to modify the Cholesky factors of the coefficient matrix. We show that our augmented algorithms outperform PARDISO (by two orders of magnitude) and CHOLMOD (by a factor of up to 5). Further, our algorithms scale better than CHOLMOD as the number of elements updated increases. The solutions are computed with high accuracy. Our algorithms are capable of computing N-x contingency analysis on a 778K-bus grid, updating a solution with x = 20 elements in 1.6 × 10^-2 seconds on an Intel Xeon processor.
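A dense, small-scale sketch of the augmented-matrix idea follows: the existing factorization of A is reused and only a small Schur complement is solved when A is perturbed by U C V^T. This illustrates the principle only; the paper's algorithms work on sparse factors and include a hybrid direct-iterative variant.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_updated(lu_piv, b, U, C, V):
    # Solve (A + U C V^T) x = b reusing an existing factorization of A:
    # block-eliminate the augmented system [[A, U], [V^T, -inv(C)]] [x; y] = [b; 0]
    # so that only a small Schur complement is factored per contingency.
    z = lu_solve(lu_piv, b)              # A z = b, reuses the factors of A
    W = lu_solve(lu_piv, U)              # A W = U
    S = np.linalg.inv(C) + V.T @ W       # small Schur complement
    y = np.linalg.solve(S, V.T @ z)
    return z - W @ y

# Usage (dense illustration): factor A once, then update cheaply per outage.
# lu_piv = lu_factor(A)
# x_new = solve_updated(lu_piv, b, U, C, V)
```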
Analyzing GAIAN Database (GaianDB) on a Tactical Network
2015-11-30
we connected 3 Raspberry Pi’s running GaianDB and our augmented version of splatform to a network of 3 CSRs. The Raspberry Pi is a low power, low...based on Debian from a connected secure digital high capacity (SDHC) card or a universal serial bus (USB) device. The Raspberry Pi comes equipped with...requirements, capabilities, and cost make the Raspberry Pi a useful device for sensor experimentation. From there, we performed 3 types of benchmarks
NASA Technical Reports Server (NTRS)
Hewett, Marle D.; Tartt, David M.; Duke, Eugene L.; Antoniewicz, Robert F.; Brumbaugh, Randal W.
1988-01-01
The development of an automated flight test management system (ATMS) as a component of a rapid-prototyping flight research facility for AI-based flight systems concepts is described. The rapid-prototyping facility includes real-time high-fidelity simulators, numeric and symbolic processors, and high-performance research aircraft modified to accept commands for a ground-based remotely augmented vehicle facility. The flight system configuration of the ATMS includes three computers: the TI explorer LX and two GOULD SEL 32/27s.
An Experiment Support Computer for Externally-Based ISS Payloads
NASA Astrophysics Data System (ADS)
Sell, S. W.; Chen, S. E.
2002-01-01
The Experiment Support Facility - External (ESF-X) is a computer designed for general experiment use aboard the International Space Station (ISS) Truss Site locations. The ESF-X design is highly modular and uses commercial off-the-shelf (COTS) components wherever possible to allow for maximum reconfigurability to meet the needs of almost any payload. The ESF-X design has been developed with the EXPRESS Pallet as the target location and the University of Colorado's Micron Accuracy Deployment Experiment (MADE) as the anticipated first payload and capability driver. Thus the design presented here is configured for structural dynamics and control as well as optics experiments. The ESF-X is a small (58.4 x 48.3 x 17.8") steel and copper enclosure which houses a 14 slot VME card chassis and power supply. All power and data connections are made through a single panel on the enclosure so that only one side of the enclosure must be accessed for nominal operation and servicing activities. This feature also allows convenient access during integration and checkout activities. Because it utilizes a standard VME backplane, ESF-X can make use of the many commercial boards already in production for this standard. Since the VME standard is also heavily used in industrial and military applications, many ruggedized components are readily available. The baseline design includes commercial processors, Ethernet, MIL-STD-1553, and mass storage devices. The main processor board contains four TI 6701 DSPs with a PowerPC based controller. Other standard functions, such as analog-to-digital, digital-to-analog, motor driver, temperature readings, etc., are handled on industry-standard IP modules. Carrier cards, which hold 4 IP modules each, are placed in slots in the VME backplane. A unique, custom IP carrier board with radiation event detectors allows non RAD-hard components to be used in an extended exposure environment. Thermal control is maintained by conductive cooling through the copper floor of the enclosure. All components, including the VME backplane, are thermally connected to the floor. The VME chassis can accept both conduction-cooled and convection cooled cards; non-conduction-cooled cards are simply thermal-strapped to the VME chassis. The current ESF-X configuration provides 44 high-rate A/D, 48 low-rate temperature RTDs, 32 digital IO channels (DIO), as well as drivers for digital position encoders, video frame grabbers, an optical interferometry system, stepper motors, paraffin actuators, high torque DC brushless motors, and piezoelectric actuators based on capability demands derived from the MADE program. ESF-X is presently in the critical design phase; potential users are welcome to submit comments and capability requests.
NASA Astrophysics Data System (ADS)
Grzeszczuk, A.; Kowalski, S.
2015-04-01
Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by Nvidia to speed up graphics by calculating processes in parallel. The success of this solution has opened General-Purpose Graphic Processor Unit (GPGPU) technology to applications not coupled with graphics. A GPGPU system can be applied as an effective tool for reducing the huge volume of data in pulse shape analysis measurements, either by on-line recalculation or by a very fast compression scheme. The simplified structure of the CUDA system and the programming model, based on the example of an Nvidia GeForce GTX 580 card, are presented in our poster contribution, both in a stand-alone version and as a ROOT application.
NASA Astrophysics Data System (ADS)
Rudianto, Indra; Sudarmaji
2018-04-01
We present an implementation of the spectral-element method for the simulation of two-dimensional elastic wave propagation in fully heterogeneous media. We have incorporated most realistic geological features in the model, including surface topography, curved layer interfaces, and 2-D wave-speed heterogeneity. To accommodate such complexity, we use an unstructured quadrilateral meshing technique. The simulation was performed on a GPU cluster, which consists of 24 Intel Xeon processor cores and 4 NVIDIA Quadro graphics cards, using a CUDA and MPI implementation. We speed up the computation by a factor of about 5 compared to MPI only, and by a factor of about 40 compared to the serial implementation.
Autonomous Space Processor for Orbital Debris (ASPOD)
NASA Technical Reports Server (NTRS)
Ramohalli, Kumar; Mitchell, Dominique; Taft, Brett
1992-01-01
A project in the Advanced Design Program at the University of Arizona is described. The project is named the Autonomous Space Processor for Orbital Debris (ASPOD) and is a Universities Space Research Association (USRA) sponsored design project. The development of ASPOD and the students' abilities in designing and building a prototype spacecraft are the ultimate goals of this project. This year's focus entailed the development of a secondary robotic arm and end-effector to work in tandem with an existent arm in the removal of orbital debris. The new arm features the introduction of composite materials and a linear drive system, thus producing a light-weight and more accurate prototype. The main characteristic of the end-effector design is that it incorporates all of the motors and gearing internally, thus not subjecting them to the harsh space environment. Furthermore, the arm and the end-effector are automated by a control system with positional feedback. This system is composed of magnetic and optical encoders connected to a 486 PC via two servo-motor controller cards. Programming a series of basic routines and sub-routines allowed the ASPOD prototype to become more autonomous. The new system is expected to perform specified tasks with a positional accuracy of 0.5 cm.
Retroreflector field tracker [noncontact optical position sensor for space application]
NASA Technical Reports Server (NTRS)
Wargocki, F. E.; Ray, A. J.; Hall, G. E.
1984-01-01
An electrooptical position-measuring instrument, the Retroreflector Field Tracker or RFT, is described. It is part of the Dynamic Augmentation Experiment - a part of the payload of Space Shuttle flight 41-D in Summer 1984. The tracker measures and outputs the position of 23 reflective targets placed on a 32-m solar array to provide data for determination of the dynamics of the lightweight structure. The sensor uses a 256 x 256 pixel CID detector; the processor electronics include three Z-80 microprocessors. A pulsed laser diode illuminator is used.
Numerical study of the vortex tube reconnection using vortex particle method on many graphics cards
NASA Astrophysics Data System (ADS)
Kudela, Henryk; Kosior, Andrzej
2014-08-01
Vortex Particle Methods are one of the most convenient ways of tracking the evolution of vorticity. In this article we present a numerical recreation of a real-life experiment concerning the head-on collision of two vortex rings. In the experiment the evolution and reconnection of the vortex structures is tracked with passive markers (paint particles), which in a viscous fluid do not follow the evolution of the vorticity field. In the numerical computations we show the difference between the vorticity evolution and the movement of passive markers. The agreement with the experiment was very good. Due to the very long computation times on a single processor, the Vortex-in-Cell method was implemented on the multicore architecture of graphics cards (GPUs). Vortex Particle Methods are very well suited for parallel computations: as there are myriads of particles in the flow and the same equations of motion have to be solved for each of them, the SIMD architecture used in GPUs seems to be a perfect fit. The main disadvantage in this case is the small amount of RAM memory. To overcome this problem we created a multi-GPU implementation of the VIC method. Some remarks on parallel computing are given in the article.
NASA Astrophysics Data System (ADS)
Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.
2012-07-01
Astrophysical computer simulations of dense star clusters in galactic nuclei with supermassive black holes are presented, using new cost-efficient supercomputers in China accelerated by graphics processing units (GPUs). We use large high-accuracy direct N-body simulations with a Hermite scheme and block time steps, parallelised across a large number of nodes on the large scale and across many GPU thread processors on each node on the small scale. A sustained performance of more than 350 Tflop/s is reached for a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, together with studies of the largest GPU clusters in China with up to Petaflop/s performance and 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, effects of rotation in the stellar system, and relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.
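The Hermite scheme at the core of such simulations can be sketched compactly for a shared time step; the production codes add hierarchical block time steps, relativistic terms and GPU force kernels, all omitted here. Units with G = 1 and the softening value are assumptions.

```python
import numpy as np

def acc_jerk(pos, vel, mass, eps2=1e-4):
    # Pairwise softened accelerations and jerks (G = 1); this is the force
    # loop that the production codes spread over many GPU thread processors.
    dr = pos[None, :, :] - pos[:, None, :]        # r_j - r_i
    dv = vel[None, :, :] - vel[:, None, :]
    r2 = (dr ** 2).sum(-1) + eps2
    np.fill_diagonal(r2, 1.0)
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)
    rv = (dr * dv).sum(-1)
    m = mass[None, :, None]
    acc = (m * dr * inv_r3[:, :, None]).sum(axis=1)
    jerk = (m * (dv - 3.0 * (rv / r2)[:, :, None] * dr)
            * inv_r3[:, :, None]).sum(axis=1)
    return acc, jerk

def hermite_step(pos, vel, mass, dt):
    # One shared-timestep 4th-order Hermite predictor/corrector step.
    a0, j0 = acc_jerk(pos, vel, mass)
    pos_p = pos + vel * dt + a0 * dt ** 2 / 2 + j0 * dt ** 3 / 6
    vel_p = vel + a0 * dt + j0 * dt ** 2 / 2
    a1, j1 = acc_jerk(pos_p, vel_p, mass)
    vel_n = vel + (a0 + a1) * dt / 2 + (j0 - j1) * dt ** 2 / 12
    pos_n = pos + (vel + vel_n) * dt / 2 + (a0 - a1) * dt ** 2 / 12
    return pos_n, vel_n
```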
Parallelization of the FLAPW method and comparison with the PPW method
NASA Astrophysics Data System (ADS)
Canning, Andrew; Mannstadt, Wolfgang; Freeman, Arthur
2000-03-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. In the past the FLAPW method has been limited to systems of about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell running on up to 512 processors on a Cray T3E parallel supercomputer. Some results will also be presented on a comparison of the plane-wave pseudopotential method and the FLAPW method on large systems.
Autonomous space processor for orbital debris
NASA Technical Reports Server (NTRS)
Ramohalli, Kumar; Marine, Micky; Colvin, James; Crockett, Richard; Sword, Lee; Putz, Jennifer; Woelfle, Sheri
1991-01-01
The development of an Autonomous Space Processor for Orbital Debris (ASPOD) was the goal. The nature of this craft, which will process, in situ, orbital debris using resources available in low Earth orbit (LEO) is explained. The serious problem of orbital debris is briefly described and the nature of the large debris population is outlined. The focus was on the development of a versatile robotic manipulator to augment an existing robotic arm, the incorporation of remote operation of the robotic arms, and the formulation of optimal (time and energy) trajectory planning algorithms for coordinated robotic arms. The mechanical design of the new arm is described in detail. The work envelope is explained showing the flexibility of the new design. Several telemetry communication systems are described which will enable the remote operation of the robotic arms. The trajectory planning algorithms are fully developed for both the time optimal and energy optimal problems. The time optimal problem is solved using phase plane techniques while the energy optimal problem is solved using dynamic programming.
High performance in silico virtual drug screening on many-core processors.
McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A
2015-05-01
Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.
High performance in silico virtual drug screening on many-core processors
Price, James; Sessions, Richard B; Ibarra, Amaurys A
2015-01-01
Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel’s Xeon Phi and multi-core CPUs with SIMD instruction sets. PMID:25972727
Chikkagoudar, Satish; Wang, Kai; Li, Mingyao
2011-05-26
Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.
2011-01-01
Background: Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Findings: Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. Conclusions: GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/. PMID:21615923
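The fragment-based parallelization can be illustrated with a CPU-only Python sketch: the SNP matrix is split into fragments and every within- and between-fragment block of pairs becomes an independent task. The per-pair score is a toy stand-in (GENIE fits logistic models on the GPU or CPU), and all names and sizes are illustrative.

```python
import numpy as np
from itertools import combinations
from concurrent.futures import ProcessPoolExecutor

def interaction_score(g1, g2, y):
    # Toy stand-in for the per-pair test: correlation between the genotype
    # product and the binary trait.
    x = (g1 * g2).astype(float)
    x -= x.mean()
    yc = y - y.mean()
    denom = np.sqrt((x ** 2).sum() * (yc ** 2).sum())
    return float(x @ yc / denom) if denom else 0.0

def fragment_pair_task(args):
    # One unit of work: all SNP-SNP pairs within a fragment (ia == ib)
    # or across two fragments (ia != ib).
    (ia, frag_a), (ib, frag_b), y = args
    if ia == ib:
        pairs = combinations(range(frag_a.shape[0]), 2)
    else:
        pairs = ((i, j) for i in range(frag_a.shape[0])
                 for j in range(frag_b.shape[0]))
    return [((ia, i), (ib, j), interaction_score(frag_a[i], frag_b[j], y))
            for i, j in pairs]

def run_interactions(genotypes, y, frag_size=256, workers=8):
    # Partition the SNP-by-sample matrix into fragments and schedule every
    # fragment pair (including each fragment with itself) in parallel.
    frags = [(k, genotypes[s:s + frag_size]) for k, s in
             enumerate(range(0, genotypes.shape[0], frag_size))]
    tasks = [(a, b, y) for i, a in enumerate(frags) for b in frags[i:]]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return [hit for chunk in pool.map(fragment_pair_task, tasks)
                for hit in chunk]
```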
Speech coding at 4800 bps for mobile satellite communications
NASA Technical Reports Server (NTRS)
Gersho, Allen; Chan, Wai-Yip; Davidson, Grant; Chen, Juin-Hwey; Yong, Mei
1988-01-01
A speech compression project has recently been completed to develop a speech coding algorithm suitable for operation in a mobile satellite environment, aimed at providing telephone-quality natural speech at 4.8 kbps. The work has resulted in two alternative techniques which achieve reasonably good communications quality at 4.8 kbps while tolerating vehicle noise and rather severe channel impairments. The algorithms are embodied in a compact self-contained prototype consisting of two AT&T 32-bit floating-point DSP32 digital signal processors (DSPs). A Motorola 68HC11 microcomputer chip serves as the board controller and interface handler. The prototype's circuit footprint on a wire-wrapped card amounts to only 200 sq cm, and the unit consumes about 9 watts of power.
Issues in ATM Support of High-Performance, Geographically Distributed Computing
NASA Technical Reports Server (NTRS)
Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G
1995-01-01
This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.
Framework for Development and Distribution of Hardware Acceleration
NASA Astrophysics Data System (ADS)
Thomas, David B.; Luk, Wayne W.
2002-07-01
This paper describes IGOL, a framework for developing reconfigurable data processing applications. While IGOL was originally designed to target imaging and graphics systems, its structure is sufficiently general to support a broad range of applications. IGOL adopts a four-layer architecture: application layer, operation layer, appliance layer and configuration layer. This architecture is intended to separate and co-ordinate both the development and execution of hardware and software components. Hardware developers can use IGOL as an instance testbed for verification and benchmarking, as well as for distribution. Software application developers can use IGOL to discover hardware accelerated data processors, and to access them in a transparent, non-hardware specific manner. IGOL provides extensive support for the RC1000-PP board via the Handel-C language, and a wide selection of image processing filters have been developed. IGOL also supplies plug-ins to enable such filters to be incorporated in popular applications such as Premiere, Winamp, VirtualDub and DirectShow. Moreover, IGOL allows the automatic use of multiple cards to accelerate an application, demonstrated using DirectShow. To enable transparent acceleration without sacrificing performance, a three-tiered COM (Component Object Model) API has been designed and implemented. This API provides a well-defined and extensible interface which facilitates the development of hardware data processors that can accelerate multiple applications.
Multi-threaded ATLAS simulation on Intel Knights Landing processors
NASA Astrophysics Data System (ADS)
Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration
2017-10-01
The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.
MoNET: media over net gateway processor for next-generation network
NASA Astrophysics Data System (ADS)
Elabd, Hammam; Sundar, Rangarajan; Dedes, John
2001-12-01
The MoNET™ (Media over Net) SX000 product family is designed using a scalable voice, video and packet-processing platform to address applications with channel densities from a few voice channels to four OC3 per card. This platform is developed for bridging the public circuit-switched network to the next generation packet telephony and data network. The platform consists of a DSP farm, RISC processors and interface modules. The DSP farm is required to execute voice compression, image compression and line echo cancellation algorithms for a large number of voice, video, fax, modem or data channels. RISC CPUs are used for performing various packetizations based on RTP, UDP/IP and ATM encapsulations. In addition, the RISC CPUs also participate in the DSP farm load management and communication with the host and other MoP devices. The MoNET™ S1000 communications device is designed for voice processing and for bridging TDM to ATM and IP packet networks. The S1000 consists of a DSP farm based on the Carmel DSP core and a 32-bit RISC CPU, along with Ethernet, Utopia, PCI, and TDM interfaces. In this paper, we will describe the VoIP infrastructure, the building blocks of the S500, S1000 and S3000 devices, the algorithms executed on these devices and the associated channel densities, the detailed DSP architecture, the memory architecture, data flow and scheduling.
Telephone speech comprehension in children with multichannel cochlear implants.
Aronson, L; Estienne, P; Arauz, S L; Pallante, S A
1997-11-01
Telephone speech comprehension is being evaluated in six prelingually deaf children implanted with the Nucleus 22 prosthesis fitted with the Speak strategy. All of them have had at least 1.5 years of experience with their implant. When the tests began, they had already had at least 2 months' experience with the same map in their speech processor. The children were trained in the use of the telephone as part of the rehabilitation program. None of them used it regularly but as a game that they found very entertaining. A special battery, the Bate-fon (batería para teléfono = telephone battery), was designed for training and evaluation purposes. It includes the five Spanish vowels in isolation, diphthongs, onomatopoetic animal voices, two-syllable, and three-syllable words. The tests were administered 1.5-2 years after the switch-on of their speech processor. Standard acoustic telephone coupling was used. The speech material was presented to the child on colored cards. Stimuli were presented twice. Children were informed when the response was incorrect. Averaged results indicated that the percentages of correct responses for all the speech material increase in the second presentation. All children have shown some degree of telephone communication abilities. As a result of the training, some of the children are using the telephone to communicate with their families.
A portable detection instrument based on DSP for beef marbling
NASA Astrophysics Data System (ADS)
Zhou, Tong; Peng, Yankun
2014-05-01
Beef marbling is one of the most important indices for assessing beef quality. Marbling is graded by measuring the fat distribution density in the rib-eye region. However, in most beef slaughterhouses and businesses, quality grades still depend on trained personnel grading by eye or comparing the beef slice to the Chinese standard sample cards. Manual grading not only demands great labor but also lacks objectivity and accuracy. To meet the needs of beef slaughterhouses and businesses, a beef marbling detection instrument was designed. The instrument employs Charge-Coupled Device (CCD) imaging, digital image processing, Digital Signal Processor (DSP) control and processing, and Liquid Crystal Display (LCD) screen display techniques. Its core is a Texas Instruments (TI) TMS320DM642 digital signal processor, which combines high-speed data processing capabilities with real-time processing features. All steps, including image acquisition, data transmission, image processing algorithms, and display, are implemented on this instrument for quick, efficient, and non-invasive detection of beef marbling. The system structure, working principle, hardware, and software are introduced in detail. The device is compact and easy to transport, and it can determine the grade of beef marbling reliably and correctly.
Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing
NASA Astrophysics Data System (ADS)
Amooie, M. A.; Moortgat, J.
2017-12-01
We report on the "Buckeye-Pi" cluster, the supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with fast Quad Core 1.2GHz ARMv8 64bit processor, 1GB of RAM, and 32GB microSD card for local storage. Therefore, the cluster has a total RAM of 128GB that is distributed on the individual nodes and a flash capacity of 4TB with 512 processors, while it benefits from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between each node. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance-computing (HPC) and handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows with the goal of achieving a massively-parallelized scalable code. We present benchmarking results for the computational performance across various number of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and a feasible learning platform for challenging engineering and scientific problems.
NASA Technical Reports Server (NTRS)
Ramohalli, Kumar; Mitchell, Dominique; Taft, Brett; Chinnock, Paul; Kutz, Bjoern
1992-01-01
This paper describes a project in the Advanced Design Program at the University of Arizona. The project, named the Autonomous Space Processor for Orbital Debris (ASPOD), is a NASA/Universities Space Research Association (USRA) sponsored design project. The development of ASPOD and of the students' abilities in designing and building a prototype spacecraft are the ultimate goals of the project. This year's focus entailed the development of a secondary robotic arm and end-effector to work in tandem with an existing arm in the removal of orbital debris. The new arm features the introduction of composite materials and a linear drive system, producing a lighter and more accurate prototype. The main characteristic of the end-effector design is that it incorporates all of the motors and gearing internally, thus not subjecting them to the harsh space environment. Furthermore, the arm and the end-effector are automated by a control system with positional feedback. This system is composed of magnetic and optical encoders connected to a 486 PC via two servo-motor controller cards. Programming a series of basic routines and sub-routines has allowed the ASPOD prototype to become more autonomous. The new system is expected to perform specified tasks with a positional accuracy of 0.5 cm.
Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azad, Ariful; Buluç, Aydın; Pothen, Alex
It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.
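For readers unfamiliar with augmenting paths, the sketch below shows the classical serial single-source augmenting-path algorithm for maximum cardinality bipartite matching. It is only a baseline illustration of the concept the abstract builds on, not the parallel multi-source BFS or tree-grafting method described above; the adjacency structure and example graph are made up.

```python
# Serial augmenting-path baseline for maximum cardinality bipartite matching.
# adj[u] lists the right-side neighbors of left vertex u.
def max_bipartite_matching(adj, n_left, n_right):
    match_right = [-1] * n_right          # right vertex -> matched left vertex

    def try_augment(u, visited):
        for v in adj[u]:
            if not visited[v]:
                visited[v] = True
                # v is free, or its current partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    matching_size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching_size += 1
    return matching_size, match_right

# Tiny example: 3 left vertices, 3 right vertices.
print(max_bipartite_matching({0: [0, 1], 1: [0], 2: [2]}, 3, 3))  # -> (3, [1, 0, 2])
```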
GPU accelerated dynamic functional connectivity analysis for functional MRI data.
Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu
2015-07-01
Recent advances in multi-core processors and graphics-card-based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to a 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate the DFC analyses significantly. The developed algorithms make DFC analysis more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
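The computational kernel being parallelized is, at its core, a sliding-window correlation over regional time courses. The serial NumPy sketch below shows that kernel only; the window length, step, and random data are arbitrary placeholders, and the paper's OpenMP/CUDA thread- and block-level mappings are not reproduced here.

```python
# Sketch of the core sliding-window computation in dynamic functional connectivity:
# windowed Pearson correlations between every pair of regional fMRI time courses.
import numpy as np

def dynamic_fc(timecourses, window, step=1):
    """timecourses: (n_regions, n_timepoints) -> (n_windows, n_regions, n_regions)."""
    n_regions, n_t = timecourses.shape
    starts = range(0, n_t - window + 1, step)
    return np.stack([np.corrcoef(timecourses[:, s:s + window]) for s in starts])

rng = np.random.default_rng(0)
data = rng.standard_normal((10, 300))        # 10 regions, 300 time points (toy data)
fc = dynamic_fc(data, window=40, step=5)
print(fc.shape)                              # (53, 10, 10)
```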
The research of laser marking control technology
NASA Astrophysics Data System (ADS)
Zhang, Qiue; Zhang, Rong
2009-08-01
In the area of laser marking, the usual control method is to insert a control card into the computer's motherboard; this does not support hot swapping and makes assembly and maintenance difficult. Moreover, each marking system must be equipped with its own computer, and while marking, the computer cannot do anything other than transmit the marking data without affecting marking precision. Because traditional control methods have these problems, a scheme is introduced in which marking graphics editing and data processing are completed by the computer, while a high-speed digital signal processor (DSP) controls the whole marking process. The laser marking controller mainly contains a DSP2812, digital memory, a DAC (digital-to-analog converting) unit circuit, a USB interface control circuit, a man-machine interface circuit, and other logic control circuitry. The marking information processed by the computer is downloaded to a USB flash drive (U disk); the DSP reads the information through the USB interface when needed, processes it, uses its internal timer to control the marking time sequence, and outputs the scanner control signals through the D/A unit. Applying this technology enables offline marking, thereby reducing product cost and increasing production efficiency. The system performs well in actual marking units; the marking speed is about 20 percent faster than with a PCI control card. It has practical application value.
NASA Astrophysics Data System (ADS)
Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.
2013-10-01
In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
RTSPM: real-time Linux control software for scanning probe microscopy.
Chandrasekhar, V; Mehta, M M
2013-01-01
Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
Avionics for a Small Satellite
NASA Technical Reports Server (NTRS)
Abbott, Larry; Jochim, David; Schuler, Robert
2001-01-01
This paper discusses a small, seven and a half (7.5) inch diameter satellite that NASA-JSC is developing as a technology demonstrator for an astronaut assistant free flyer. The Free Flyer is designed to offload flight crew workload by performing inspections of the exterior of the Space Shuttle or the International Space Station. It is designed to be operated by the flight crew, thereby reducing the number of Extravehicular Activities (EVAs), or by an astronaut on the ground, further reducing crew workload. The paper focuses on the design constraints of a small satellite and the technology approach used to achieve the set of high-performance requirements specified for the Free Flyer. Particular attention is paid to the processor card, as it is the heart and system integration point of the Free Flyer.
The LSLE echocardiograph - Commercial hardware aboard Spacelab. [Life Sciences Laboratory Equipment
NASA Technical Reports Server (NTRS)
Schwarz, R.
1983-01-01
The Life Sciences Laboratory Equipment Echocardiograph, a commercial 77020AC Ultrasound Imaging System modified to meet NASA's spacecraft standards, is described. The assembly consists of four modules: display and control, scanner, scan converter, and physioamplifiers. Four separate processors communicate over an IEEE-488 bus, and the system has more than 6000 individual components on 35 printed circuit cards. Three levels of self test are provided: a short test during power up, a basic test initiated by a front panel switch, and interactive tests for specific routines. Default mode operation further enhances reliability. Modifications of the original system include the replacement of ac power supplies with dc-to-dc converters, a slide-out keyboard (to prevent accidental operation), Teflon-insulated wire, and additional shielding for the ultrasound transducer cable.
A Linux PC Cluster for Lattice QCD with Exact Chiral Symmetry
NASA Astrophysics Data System (ADS)
Chiu, Ting-Wai; Hsieh, Tung-Han; Huang, Chao-Hsi; Huang, Tsung-Ren
A computational system for lattice QCD with overlap Dirac quarks is described. The platform is a home-made Linux PC cluster, built with off-the-shelf components. At present the system consists of 64 nodes, with each node containing one Pentium 4 processor (1.6/2.0/2.5 GHz), one Gbyte of PC800/1066 RDRAM, one 40/80/120 Gbyte hard disk, and a network card. The computationally intensive parts of our program are written in SSE2 code. The speed of our system is estimated to be 70 Gflops, and its price/performance ratio is better than $1.0/Mflops for 64-bit (double precision) computations in quenched QCD. We discuss how to optimize its hardware and software for computing propagators of overlap Dirac quarks.
Encoder fault analysis system based on Moire fringe error signal
NASA Astrophysics Data System (ADS)
Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu
2018-02-01
To address the faults and erroneous codes that arise in practical applications of photoelectric shaft encoders, a fast and accurate encoder fault analysis system is developed based on Moire fringe photoelectric signal processing. A DSP28335 is selected as the core processor, and a high-speed serial A/D converter acquisition card is used. A temperature measuring circuit based on the AD7420 is also designed. Discrete Moire fringe error signal data are collected at different temperatures and sent to the host computer through wireless transmission. The error signal quality index and fault type are displayed on the host computer based on the error signal identification method. The error signal quality can be used to diagnose erroneous code states through the human-machine interface.
A Protein in the palm of your hand through augmented reality.
Berry, Colin; Board, Jason
2014-01-01
Understanding of proteins and other biological macromolecules must be based on an appreciation of their 3-dimensional shape and the fine details of their structure. Conveying these details in a clear and stimulating fashion can present challenges using conventional approaches and 2-dimensional monitors and projectors. Here we describe a method for the production of 3-D interactive images of protein structures that can be manipulated in real time through the use of augmented reality software. Users first see a real-time image of themselves using the computer's camera, then, when they hold up a trigger image, a model of a molecule appears automatically in the video. This model rotates and translates in space in response to movements of the trigger card. The system described has been optimized to allow customization for the display of user-selected structures to create engaging, educational visualizations to explore 3-D structures. Copyright © 2014 The International Union of Biochemistry and Molecular Biology.
Bruno, Antonio; Pandolfo, Gianluca; Crucitti, Manuela; Lorusso, Simona; Zoccali, Rocco Antonio; Muscatello, Maria Rosaria Anna
This was the first 12-week, open-label, uncontrolled trial aimed at exploring the efficacy of acetyl-L-carnitine (ALC) add-on pharmacotherapy on clinical symptoms and cognitive functioning in 15 schizophrenia patients with suboptimal clinical response despite receiving clozapine (CLZ) monotherapy at the highest tolerated dosage. After clinical (Positive and Negative Syndrome Scale [PANSS]) and neuropsychological (Wisconsin Card Sorting Test, Stroop Color-Word Test, Verbal Fluency Test) assessments, patients received 1 g/d of ALC for 12 weeks. A final sample of 9 subjects completed the study. Acetyl-L-carnitine augmentation of CLZ significantly reduced only the PANSS "positive" domain (P = 0.049); at end point, only 2 subjects (22.2% of the completers) reached a minimal improvement (25% reduction in PANSS total score). No significant differences emerged in cognitive performance at the end of the study; effect sizes were small in each explored cognitive dimension. The findings provide preliminary evidence that ALC added to ongoing CLZ treatment appears to be ineffective for improving symptoms in schizophrenia patients who have failed to respond sufficiently to CLZ. Further trials with adequately powered methodology are needed to identify which augmentation strategies are more effective in schizophrenia patients showing a suboptimal response to CLZ.
NASA Technical Reports Server (NTRS)
1975-01-01
The trajectory simulation mode (SIMSEP) requires the namelist SIMSEP to follow TRAJ. The SIMSEP namelist contains parameters which describe the scope of the simulation, expected dynamic errors, and cumulative statistics from previous SIMSEP runs. Following SIMSEP is a set of GUID namelists, one for each guidance correction maneuver. The GUID namelist describes the strategy, knowledge or estimation uncertainties, and cumulative statistics for that particular maneuver. The trajectory display mode (REFSEP) requires only the namelist TRAJ followed by scheduling cards, similar to those used in GODSEP. The fixed-field schedule cards define the types of data displayed, the span of interest, and the frequency of printout. For those users who can vary the amount of blank common storage in their runs, a guideline to estimate the total MAPSEP core requirement is given. Blank common length is related directly to the dimension of the dynamic state (NDIM) used in transition matrix (STM) computation and to the total augmented (knowledge) state (NAUG). The values of program and blank common must be added to compute the total decimal core for a CDC 6500. Other operating systems must scale these requirements appropriately.
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.
1991-01-01
Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields show that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diam clouds to about 1500 m in the vertical.
NASA Astrophysics Data System (ADS)
Weigand, R.
Two new processor devices have been developed for use on board spacecraft. An 8-bit 8032 microcontroller targets typical control applications in instruments and sub-systems, or could be used as a main processor on small satellites, whereas the LEON 32-bit SPARC processor can be used for high-performance control and data processing tasks. The ADV80S32 is fully compliant with the Intel 8051 architecture and instruction set, extended by additional peripherals, 512 bytes of on-chip RAM, and a bootstrap PROM, which allows downloading the application software using the CCSDS PacketWire protocol. The memory controller provides a de-multiplexed address/data bus and allows access to up to 16 MB of data and 8 MB of program RAM. The peripherals have been designed for the specific needs of a spacecraft, such as serial interfaces compatible with RS232, PacketWire and TTC-B-01, counters/timers for extended duration, and a CRC calculation unit accelerating the CCSDS TM/TC protocol. The 0.5 um Atmel manufacturing technology (MG2RT) provides latch-up and total dose immunity; SEU fault immunity is implemented by using SEU-hardened flip-flops and EDAC protection of internal and external memories. The maximum clock frequency of 20 MHz allows a processing power of 3 MIPS. Engineering samples are available. For SW development, various SW packages for the 8051 architecture are on the market. The LEON processor implements a 32-bit SPARC V8 architecture, including all the multiply and divide instructions, complemented by a floating-point unit (FPU). It includes several standard peripherals, such as timers/watchdog, an interrupt controller, UARTs, parallel I/Os and a memory controller, allowing the use of 8-, 16- and 32-bit PROM, SRAM or memory-mapped I/O. With separate on-chip instruction and data caches, almost one instruction per clock cycle can be reached in some applications. A 33-MHz 32-bit PCI master/target interface and a PCI arbiter allow the device to be operated on a plug-in card (for SW development on a PC, etc.), or to be used as a PCI master controller in an on-board system. Advanced SEU fault tolerance is introduced by design, using triple modular redundancy (TMR) flip-flops for all registers and EDAC protection for all memories. The device will be manufactured in a radiation-hard Atmel 0.25 um technology, targeting 100 MHz processor clock frequency. The non-fault-tolerant LEON processor VHDL model is available as free source code, and the SPARC architecture is a well-known industry standard. Therefore, know-how, software tools and operating systems are widely available.
Efficient Analysis of Simulations of the Sun's Magnetic Field
NASA Astrophysics Data System (ADS)
Scarborough, C. W.; Martínez-Sykora, J.
2014-12-01
Dynamics in the solar atmosphere, including solar flares, coronal mass ejections, micro-flares and different types of jets, are powered by the evolution of the Sun's intense magnetic field. 3D radiative magnetohydrodynamics (MHD) computer simulations have furthered our understanding of the processes involved: when non-aligned magnetic field lines reconnect, the alteration of the magnetic topology causes stored magnetic energy to be converted into thermal and kinetic energy. Detailed analysis of this evolution entails tracing magnetic field lines, an operation which is not time-efficient on a single processor. By utilizing a graphics card (GPU) to trace lines in parallel, such analysis becomes feasible. We applied our GPU implementation to the most advanced 3D radiative-MHD simulations of the solar atmosphere (Bifrost; Gudiksen et al. 2011) in order to better understand the evolution of the modeled field lines.
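Field line tracing itself amounts to integrating dx/ds = B(x)/|B(x)| from many seed points. The sketch below shows one such trace with a fixed-step RK4 scheme on an analytic stand-in field; the GPU implementation described above runs one trace per thread against the Bifrost data cube, which is not reproduced here.

```python
# Minimal serial field line trace by RK4 integration of the unit field direction.
import numpy as np

def B_field(x):                      # placeholder analytic field, not simulation data
    return np.array([-x[1], x[0], 0.2])

def trace_field_line(seed, ds=0.01, n_steps=2000):
    def direction(x):
        b = B_field(x)
        return b / np.linalg.norm(b)
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        x = pts[-1]
        k1 = direction(x)
        k2 = direction(x + 0.5 * ds * k1)
        k3 = direction(x + 0.5 * ds * k2)
        k4 = direction(x + ds * k3)
        pts.append(x + ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
    return np.array(pts)

line = trace_field_line([1.0, 0.0, 0.0])
print(line.shape)        # (2001, 3)
```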
Szarka, Mate; Guttman, Andras
2017-10-17
We present the application of a smartphone anatomy based technology in the field of liquid phase bioseparations, particularly in capillary electrophoresis. A simple capillary electrophoresis system was built with LED induced fluorescence detection and a credit card sized minicomputer to prove the concept of real time fluorescent imaging (zone adjustable time-lapse fluorescence image processor) and separation controller. The system was evaluated by analyzing under- and overloaded aminopyrenetrisulfonate (APTS)-labeled oligosaccharide samples. The open source software based image processing tool allowed undistorted signal modulation (reprocessing) if the signal was inappropriate for the actual detection system settings (too low or too high). The novel smart detection tool for fluorescently labeled biomolecules greatly expands dynamic range and enables retrospective correction for injections with unsuitable signal levels without the necessity to repeat the analysis.
Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.
Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme
2014-03-01
Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While the image correlation algorithm was originally used for image re-alignment using translations, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust to rotation, illumination change, translation and scaling) presents an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module, named SIFT_PyOCL, has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment both on processors and graphics cards. The performance of the algorithm allows online processing of large datasets.
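For contrast with SIFT-based registration, the translation-only image correlation baseline mentioned above can be sketched in a few lines with FFT-based phase correlation. This is not SIFT_PyOCL; it assumes purely translational misalignment and integer shifts, and the test images are synthetic.

```python
# Phase correlation: estimate the integer translation between two images.
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) displacement of img relative to ref."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:          # wrap large indices to negative shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.standard_normal((128, 128))
img = np.roll(ref, shift=(3, -7), axis=(0, 1))
print(phase_correlation_shift(ref, img))    # expected: (3, -7)
```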
NASA Technical Reports Server (NTRS)
Aucoin, P. J.; Stewart, J.; Mckay, M. F. (Principal Investigator)
1980-01-01
This document presents instructions for analysts who use EOD-LARSYS as programmed on the Purdue University IBM 370/148 (recently replaced by the IBM 3031) computer. It presents sample applications, control cards, and error messages for all processors in the system and gives detailed descriptions of the mathematical procedures and the information needed to execute the system and obtain the desired output. EOD-LARSYS is the JSC version of an integrated batch system for analysis of multispectral scanner imagery data. The material included is designed for use with the as-built documentation (volume 3) and the program listings (volume 4). The system is operational from remote terminals at Johnson Space Center under the virtual machine/conversational monitor system environment.
NASA Technical Reports Server (NTRS)
Kim, B. F.; Moorjani, K.; Phillips, T. E.; Adrian, F. J.; Bohandy, J.; Dolecek, Q. E.
1993-01-01
A method for characterization of granular superconducting thin films has been developed which encompasses both the morphological state of the sample and its fabrication process parameters. The broad scope of this technique is due to the synergism between experimental measurements and their interpretation using numerical simulation. Two novel technologies form the substance of this system: the magnetically modulated resistance method for characterizing superconductors, and a powerful new computer peripheral, the Parallel Information Processor card, which provides enhanced computing capability for PC computers. This enhancement allows PC computers to operate at speeds approaching those of supercomputers, making atomic-scale simulations possible on low-cost machines. The present development of this system involves the integration of these two technologies using mesoscale simulations of thin film growth. A future stage of development will incorporate atomic-scale modeling.
Assessing Advanced Technology in CENATE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Barker, Kevin J.; Gioiosa, Roberto
PNNL's Center for Advanced Technology Evaluation (CENATE) is a new U.S. Department of Energy center whose mission is to assess and facilitate access to emerging computing technology. CENATE is assessing a range of advanced technologies, from evolutionary to disruptive. Technologies of interest include the processor socket (homogeneous and accelerated systems), memories (dynamic, static, memory cubes), motherboards, networks (network interface cards and switches), and input/output and storage devices. CENATE is developing a multi-perspective evaluation process based on integrating advanced system instrumentation, performance measurements, and modeling and simulation. We show evaluations of two emerging network technologies: silicon photonics interconnects and the Data Vortex network. CENATE's evaluation also addresses the question of which machine is best for a given workload under certain constraints. We show a performance-power tradeoff analysis of a well-known machine learning application on two systems.
NASA Astrophysics Data System (ADS)
Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.
2018-04-01
This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is demonstrated on the problem of detecting and recognizing bank card logos in a video stream. The proposed algorithm can be effectively used in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
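A minimal sketch of the bandit-style cascade selection idea is given below: each arm is one cascade (one logo class), the reward is whether that cascade fires on the current frame, and an epsilon-greedy rule balances exploration and exploitation. The `run_cascade` stub, the probabilities, and all parameters are placeholders, not the authors' detector or their exact selection rule.

```python
# Epsilon-greedy N-armed bandit over Viola-Jones-style cascades (illustrative only).
import random

def run_cascade(cascade_id, frame):
    # Stand-in for a real detection result; a real system would run the cascade on the frame.
    return random.random() < 0.3

def select_cascades(frames, n_cascades, epsilon=0.1):
    counts = [0] * n_cascades
    values = [0.0] * n_cascades            # running mean reward per cascade
    for frame in frames:
        if random.random() < epsilon:
            arm = random.randrange(n_cascades)                       # explore
        else:
            arm = max(range(n_cascades), key=lambda a: values[a])    # exploit
        reward = 1.0 if run_cascade(arm, frame) else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]          # incremental mean
    return values

print(select_cascades(frames=range(500), n_cascades=4))
```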
NASA Technical Reports Server (NTRS)
Leucht, David K.; Koslosky, Marie J.; Kobe, David L.; Wu, Jya-Chang C.; Vavra, David A.
2011-01-01
The Space Environments Testbed (SET) is a flight controller data system for the Common Carrier Assembly. The SET-1 flight software provides command, telemetry, and experiment control to ground operators for the SET-1 mission. Modes of operation (see diagram) include: a) Boot Mode, which is initiated when power is applied to the processor card and runs memory diagnostics; it may be entered via ground command or autonomously based upon fault detection. b) Maintenance Mode, which allows limited carrier health monitoring, including power telemetry monitoring on a non-interference basis. c) Safe Mode, a predefined, minimum-power safehold configuration with power to experiments removed and carrier functionality minimized; it is used to troubleshoot problems that occur during flight. d) Operations Mode, used for normal experiment carrier operations; it may be entered only via ground command from Safe Mode.
The GPS Burst Detector W-Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCrady, D.D.; Phipps, P.
1994-08-01
The NAVSTAR satellites have two missions: navigation and nuclear detonation detection. The main objective of this paper is to describe one of the key elements of the Nuclear Detonation Detection System (NDS), the Burst Detector W-Sensor (BDW) that was developed for the Air Force Space and Missile Systems Center, its mission on GPS Block IIR, and how it utilizes GPS timing signals to precisely locate nuclear detonations (NUDET). The paper also covers the interface to the Burst Detector Processor (BDP), which links the BDW to the ground station where the BDW is controlled and where data from multiple satellites are processed to determine the location of the NUDET. The Block IIR BDW is the culmination of a development program that has produced a state-of-the-art, space-qualified digital receiver/processor that dissipates only 30 Watts, weighs 57 pounds, and has a 12 in. × 14.2 in. × 7.16 in. footprint. The paper highlights several of the key multilayer printed circuit cards without which the required power, weight, size, and radiation requirements could not have been met. In addition, key functions of the system software are covered. The paper concludes with a discussion of the high-speed digital signal processing and the algorithm used to determine the time-of-arrival (TOA) of the electromagnetic pulse (EMP) from the NUDET.
Efficient algorithms and implementations of entropy-based moment closures for rarefied gases
NASA Astrophysics Data System (ADS)
Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel
2017-07-01
We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following an approach similar to that of Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by combining its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
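To make the dual-optimization idea concrete, here is a toy 1-D sketch: the maximum-entropy ansatz f(v) = exp(Σ_i α_i m_i(v)) is fitted to prescribed moments by a damped Newton iteration on the Lagrange multipliers, with all integrals evaluated by simple trapezoidal quadrature. The basis, target moments, grid, and line search are arbitrary choices for illustration; this is not the authors' 35-moment implementation or their quadrature scheme.

```python
# Toy 1-D maximum-entropy moment problem: find alpha so that
# f(v) = exp(alpha . m(v)) reproduces the target moments of 1, v, v^2.
import numpy as np

v = np.linspace(-10.0, 10.0, 2001)                # truncated velocity grid
m = np.vstack([np.ones_like(v), v, v**2])         # moment basis functions
target = np.array([1.0, 0.5, 1.5])                # prescribed moments

def dual(alpha):                                  # convex dual objective to minimize
    with np.errstate(over="ignore"):
        return np.trapz(np.exp(alpha @ m), v) - alpha @ target

alpha = np.array([-1.0, 0.0, -0.5])               # initial multipliers
for _ in range(100):
    f = np.exp(alpha @ m)
    grad = np.trapz(m * f, v, axis=1) - target    # gradient of the dual
    if np.linalg.norm(grad) < 1e-10:
        break
    hess = np.trapz(m[:, None, :] * m[None, :, :] * f, v, axis=2)
    step = np.linalg.solve(hess, grad)            # Newton direction
    t, g0 = 1.0, dual(alpha)
    for _ in range(60):                           # backtracking line search
        if dual(alpha - t * step) < g0:
            break
        t *= 0.5
    alpha = alpha - t * step

print(alpha, np.trapz(m * np.exp(alpha @ m), v, axis=1))   # multipliers, recovered moments
```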
Computer-based desktop system for surgical videotape editing.
Vincent-Hamelin, E; Sarmiento, J M; de la Puente, J M; Vicente, M
1997-05-01
The educational role of surgical video presentations should be optimized by linking surgical images to graphic evaluation of indications, techniques, and results. We describe a PC-based video production system for personal editing of surgical tapes, according to the objectives of each presentation. The hardware requirement is a personal computer (100 MHz processor, 1 GB hard disk, 16 MB RAM) with a PC-to-TV/video transfer card plugged into a slot. Computer-generated numerical data, texts, and graphics are transformed into analog signals displayed on TV/video. A Genlock interface (a special interface card) synchronizes digital and analog signals to overlay surgical images with electronic illustrations. The presentation is stored as digital information or recorded on a tape. The proliferation of multimedia tools is leading us to adapt presentations to the objectives of lectures and to integrate conceptual analyses with dynamic image-based information. We describe a system that handles both digital and analog signals, the production being recorded on a tape. Movies may be managed in a digital environment, with either an "on-line" or "off-line" approach. System requirements are high, but handling a single device optimizes editing without incurring such complexity that management becomes impractical for surgeons. Our experience suggests that computerized editing allows linking surgical scientific and didactic messages on a single communication medium, either a videotape or a CD-ROM.
Large liquid rocket engine transient performance simulation system
NASA Technical Reports Server (NTRS)
Mason, J. R.; Southwick, R. D.
1991-01-01
A simulation system, ROCETS, was designed and developed to allow cost-effective computer predictions of liquid rocket engine transient performance. The system allows a user to generate a simulation of any rocket engine configuration using component modules stored in a library through high-level input commands. The system library currently contains 24 component modules, 57 sub-modules and maps, and 33 system routines and utilities. FORTRAN models from other sources can be operated in the system upon inclusion of interface information on comment cards. Operation of the simulation is simplified for the user by run, execution, and output processors. The simulation system makes available steady-state trim balance, transient operation, and linear partial generation. The system utilizes a modern equation solver for efficient operation of the simulations. Transient integration methods include integral and differential forms of the trapezoidal, first-order Gear, and second-order Gear corrector equations. A detailed technology test bed engine (TTBE) model was generated to be used as the acceptance test of the simulation system. The general level of model detail was that reflected in the Space Shuttle Main Engine DTM. The model successfully obtained steady-state balance in main-stage operation and simulated throttle transients, including engine starts and shutdown. A NASA FORTRAN control model was obtained, the ROCETS interface was installed in comment cards, and the control model was operated with the TTBE model in closed-loop transient mode.
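As a small illustration of the trapezoidal corrector equation mentioned above, the sketch below applies it to a generic ODE, solving the implicit relation by a few fixed-point corrector iterations. The example system is an arbitrary stand-in, not a ROCETS engine model, and ROCETS couples such correctors with its equation solver rather than plain fixed-point iteration.

```python
# Trapezoidal corrector for dy/dt = f(t, y):
#   y_{n+1} = y_n + (h/2) * (f(t_n, y_n) + f(t_{n+1}, y_{n+1}))
import numpy as np

def trapezoidal_step(f, t, y, h, iterations=5):
    f_n = f(t, y)
    y_next = y + h * f_n                     # explicit Euler predictor
    for _ in range(iterations):              # corrector iterations on the implicit equation
        y_next = y + 0.5 * h * (f_n + f(t + h, y_next))
    return y_next

# Example: simple exponential decay toward a set point.
f = lambda t, y: -2.0 * (y - 1.0)
t, y, h = 0.0, np.array([5.0]), 0.05
for _ in range(100):
    y = trapezoidal_step(f, t, y, h)
    t += h
print(t, y)      # y approaches 1.0
```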
Implementation of Augmented Reality Technology in Sangiran Museum with Vuforia
NASA Astrophysics Data System (ADS)
Purnomo, F. A.; Santosa, P. I.; Hartanto, R.; Pratisto, E. H.; Purbayu, A.
2018-03-01
Archaeological objects are evidence of ancient life, preserved in relics that are millions of years old. The ancient objects discovered by the Sangiran Museum are preserved and protected from potential damage. This research develops an Augmented Reality application for the museum that displays virtual information about the ancient objects on exhibit. The content includes text, audio, and animated 3D models representing each ancient object. The study emphasizes the 3D markerless recognition process using the Vuforia Augmented Reality (AR) system so that visitors can view the exhibited objects from different viewpoints. Based on the test results, by registering image targets at 25° angle intervals, 3D markerless keypoint features can be detected from different viewpoints. To run the application, the device must meet minimum specifications of a dual-core 1.2 GHz processor, a Power VR SG5X GPU, an 8 MP autofocus camera, and 1 GB of memory. On average, the AR application detects museum exhibition objects against the 3D markerless target in 40% of cases with a single view, in 86% with markerless multiview (for angles 0°-180°), and in 100% (for angles 0°-360°). The application's detection distance ranges from 23 cm up to 540 cm, and the average response time to detect the 3D markerless target is 12 seconds.
Automatic detection and classification of obstacles with applications in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.
2016-04-01
A hardware implementation of automatic detection and classification of objects that can represent obstacles for an autonomous mobile robot, using stereo vision algorithms, is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. The method is divided into two parts: the first is the object detection step, based on the distance from the objects to the camera and a BLOB analysis; the second is the classification step, based on visual primitives and an SVM classifier. The proposed method is executed on a GPU in order to reduce processing time. This is done with hardware based on multi-core processors and a GPU platform, using an NVIDIA GeForce GT640 graphics card and Matlab on a PC running Windows 10.
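A sketch of the first (detection) stage is shown below: threshold a depth map at the range of interest, label connected components (the BLOB analysis), and compute simple per-BLOB features that a classifier such as an SVM could consume. The synthetic depth map, thresholds, and feature set are placeholders; the paper derives depth from stereo vision and runs the pipeline on the GPU, which is not reproduced here.

```python
# Detection stage sketch: range thresholding + connected-component (BLOB) features.
import numpy as np
from scipy import ndimage

def detect_obstacles(depth_m, max_range_m=5.0, min_area_px=50):
    mask = (depth_m > 0) & (depth_m < max_range_m)       # keep nearby pixels only
    labels, n = ndimage.label(mask)                      # BLOB labeling
    features = []
    for blob_id in range(1, n + 1):
        blob = labels == blob_id
        if blob.sum() < min_area_px:
            continue                                     # ignore small noise blobs
        ys, xs = np.nonzero(blob)
        features.append({
            "area_px": int(blob.sum()),
            "bbox": tuple(int(t) for t in (ys.min(), xs.min(), ys.max(), xs.max())),
            "mean_distance_m": float(depth_m[blob].mean()),
        })
    return features                                      # feature vectors for a classifier

depth = np.full((120, 160), 20.0)                        # synthetic background at 20 m
depth[40:80, 60:100] = 2.5                               # synthetic nearby object
print(detect_obstacles(depth))
```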
Portable Computer Technology (PCT) Research and Development Program Phase 2
NASA Technical Reports Server (NTRS)
Castillo, Michael; McGuire, Kenyon; Sorgi, Alan
1995-01-01
This project report focuses on: (1) the design and development of two Advanced Portable Workstation 2 (APW 2) units. These units incorporate advanced technology features such as a low-power Pentium processor, a high-resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and Ethernet interfaces. (2) The use of these units to integrate and demonstrate advanced wireless network and portable video capabilities. (3) Qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives, the focus being on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.
Investigation of an advanced fault tolerant integrated avionics system
NASA Technical Reports Server (NTRS)
Dunn, W. R.; Cottrell, D.; Flanders, J.; Javornik, A.; Rusovick, M.
1986-01-01
Presented is an advanced, fault-tolerant multiprocessor avionics architecture as could be employed in an advanced rotorcraft such as LHX. The processor structure is designed to interface with existing digital avionics systems and concepts including the Army Digital Avionics System (ADAS) cockpit/display system, navaid and communications suites, integrated sensing suite, and the Advanced Digital Optical Control System (ADOCS). The report defines mission, maintenance and safety-of-flight reliability goals as might be expected for an operational LHX aircraft. Based on the use of a modular, compact (16-bit) microprocessor card family, the results of a preliminary study examining simplex, dual and standby-sparing architectures are presented. Given the stated constraints, it is shown that the dual architecture is best suited to meet reliability goals with minimum hardware and software overhead. The report presents hardware and software design considerations for realizing the architecture, including redundancy management requirements and techniques as well as verification and validation needs and methods.
LabPatch, an acquisition and analysis program for patch-clamp electrophysiology.
Robinson, T; Thomsen, L; Huizinga, J D
2000-05-01
An acquisition and analysis program, "LabPatch," has been developed for use in patch-clamp research. LabPatch controls any patch-clamp amplifier, acquires and records data, runs voltage protocols, plots and analyzes data, and connects to spreadsheet and database programs. Controls within LabPatch are grouped by function on one screen, much like an oscilloscope front panel. The software is mouse driven, so that the user need only point and click. Finally, the ability to copy data to other programs running in Windows 95/98, and the ability to keep track of experiments using a database, make LabPatch extremely versatile. The system requirements include Windows 95/98, at least a 100-MHz processor and 16 MB RAM, a data acquisition card, digital-to-analog converter, and a patch-clamp amplifier. LabPatch is available free of charge at http://www.fhs.mcmaster.ca/huizinga/.
Real-time generation of infrared ocean scene based on GPU
NASA Astrophysics Data System (ADS)
Jiang, Zhaoyi; Wang, Xun; Lin, Yun; Jin, Jianqiu
2007-12-01
Infrared (IR) image synthesis for ocean scenes has become more and more important nowadays, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few of the possible ways in which water interacts with its environment, and the detailed calculation of ocean temperature has rarely been considered by previous investigators. With the advance of programmable features in graphics cards, many algorithms previously limited to offline processing have become feasible for real-time use. In this paper, we propose an efficient algorithm for real-time rendering of an infrared ocean scene using the newest features of programmable graphics processors (GPUs). It differs from previous work in three aspects: adaptive GPU-based ocean surface tessellation, a sophisticated thermal balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally, some synthesized infrared images are shown, which are in good accordance with real images.
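The radiometric step behind such a scene, converting surface temperature to infrared radiance, can be sketched with Planck's law at a representative wavelength. The emissivity, wavelength, and toy temperature field below are assumptions for illustration only; the paper's surface thermal balance and GPU rendering are not reproduced here.

```python
# Convert a toy sea-surface temperature field into 10-micron spectral radiance.
import numpy as np

H = 6.626e-34      # Planck constant, J s
C = 2.998e8        # speed of light, m/s
KB = 1.381e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Blackbody spectral radiance in W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = np.expm1(H * C / (wavelength_m * KB * temperature_k))
    return a / b

sst = 288.0 + 2.0 * np.random.default_rng(2).random((4, 4))   # toy SST field, K
emissivity = 0.98                                              # assumed sea-surface emissivity
print(emissivity * planck_radiance(10e-6, sst))                # radiance at 10 microns
```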
Biomedical Wireless Ambulatory Crew Monitor
NASA Technical Reports Server (NTRS)
Chmiel, Alan; Humphreys, Brad
2009-01-01
A compact, ambulatory biometric data acquisition system has been developed for space and commercial terrestrial use. BioWATCH (Biomedical Wireless and Ambulatory Telemetry for Crew Health) acquires signals from biomedical sensors using acquisition modules attached to a common data and power bus. Several slots allow the user to configure the unit by inserting sensor-specific modules. The data are then sent in real time from the unit over any commercially implemented wireless network, including 802.11b/g, WCDMA, and 3G. The system has a distributed computing hierarchy with a common data controller on each sensor module. This allows for the modularity of the device along with the tailored ability to control the cards using a relatively small master processor. The distributed nature of this system affords modularity, size, and power consumption that improve on the current state of the art in medical ambulatory data acquisition. A new company was created to market this technology.
A Future Accelerated Cognitive Distributed Hybrid Testbed for Big Data Science Analytics
NASA Astrophysics Data System (ADS)
Halem, M.; Prathapan, S.; Golpayegani, N.; Huang, Y.; Blattner, T.; Dorband, J. E.
2016-12-01
As increasing volumes of sensor spectral data from current and future Earth-observing satellites are assimilated into high-resolution climate models, intensive cognitive machine learning technologies are needed to data mine, extract and intercompare model outputs. It is clear today that the next generation of computers and storage, beyond petascale cluster architectures, will be data centric: they will manage data movement and process data in place. Future cluster nodes have been announced that integrate multiple CPUs with high-speed links to GPUs and MICs on their backplanes, with massive non-volatile RAM and access to active flash RAM disk storage. Active Ethernet-connected key-value store disk drives with 10 GbE or higher are now available through the Kinetic Open Storage Alliance. At the UMBC Center for Hybrid Multicore Productivity Research, a future state-of-the-art Accelerated Cognitive Computer System (ACCS) for Big Data science is being integrated into the current IBM iDataPlex computational system `bluewave'. Looking ahead to the next-generation IBM 200 PF Sierra processor, an interim two-node IBM Power S822 testbed is being integrated with dual 10-core POWER8 processors, 1 TB of RAM, a PCIe-attached K80 GPU, and an FPGA Coherent Accelerator Processor Interface (CAPI) card to 20 TB of flash RAM. This system is to be updated to POWER8+ with NVLink 1.0 and the Pascal GPU late in 2016. Moreover, the Seagate 96 TB Kinetic disk system with 24 Ethernet-connected active disks is integrated into the ACCS storage system. A Lightweight Virtual File System developed at NASA GSFC is installed on bluewave. Since remote access to publicly available quantum annealing computers is offered at several government labs, the ACCS will provide an in-line Restricted Boltzmann Machine optimization capability on the D-Wave 2X quantum annealing processor, reached over the campus high-speed 100 Gb network to Internet2 for large files. As an evaluation test of the cognitive functionality of the architecture, the following studies utilizing all the system components will be presented: (i) a near-real-time climate change study generating CO2 fluxes, (ii) a deep-dive capability into an 8000 x 8000 pixel image pyramid display, and (iii) large dense and sparse eigenvalue decompositions.
Siebert, Johan N; Ehrler, Frederic; Gervaix, Alain; Haddad, Kevin; Lacroix, Laurence; Schrurs, Philippe; Sahin, Ayhan; Lovis, Christian; Manzano, Sergio
2017-05-29
The American Heart Association (AHA) guidelines for cardiopulmonary resuscitation (CPR) are nowadays recognized as the world's most authoritative resuscitation guidelines. Adherence to these guidelines optimizes the management of critically ill patients and increases their chances of survival after cardiac arrest. Despite their availability, suboptimal quality of CPR is still common. Currently, the median hospital survival rate after pediatric in-hospital cardiac arrest is 36%, whereas it falls below 10% for out-of-hospital cardiac arrest. Among emerging information technologies and devices able to support caregivers during resuscitation and increase adherence to AHA guidelines, augmented reality (AR) glasses have not yet been assessed. In order to assess their potential, we adapted AHA Pediatric Advanced Life Support (PALS) guidelines for AR glasses. The study aimed to determine whether adapting AHA guidelines for AR glasses increased adherence by reducing deviation and time to initiation of critical life-saving maneuvers during pediatric CPR when compared with the use of PALS pocket reference cards. We conducted a randomized controlled trial with two parallel groups of voluntary pediatric residents, comparing AR glasses to PALS pocket reference cards during a simulation-based pediatric cardiac arrest scenario-pulseless ventricular tachycardia (pVT). The primary outcome was the elapsed time in seconds in each allocation group, from onset of pVT to the first defibrillation attempt. Secondary outcomes were time elapsed to (1) initiation of chest compression, (2) subsequent defibrillation attempts, and (3) administration of drugs, as well as the time intervals between defibrillation attempts and drug doses, shock doses, and number of shocks. All these outcomes were assessed for deviation from AHA guidelines. Twenty residents were randomized into 2 groups. Time to first defibrillation attempt (mean: 146 s) and adherence to AHA guidelines in terms of time to other critical resuscitation endpoints and drug dose delivery were not improved using AR glasses. However, errors and deviations were significantly reduced in terms of defibrillation doses when compared with the use of the PALS pocket reference cards. In a total of 40 defibrillation attempts, residents not wearing AR glasses used wrong doses in 65% (26/40) of cases, including 21 shock overdoses >100 J, for a cumulative defibrillation dose of 18.7 Joules per kg. These errors were reduced by 53% (21/40, P<.001) and cumulative defibrillation dose by 37% (5.14/14, P=.001) with AR glasses. AR glasses did not decrease time to first defibrillation attempt and other critical resuscitation endpoints when compared with PALS pocket cards. However, they improved adherence and performance among residents in terms of administering the defibrillation doses set by AHA. ©Johan N Siebert, Frederic Ehrler, Alain Gervaix, Kevin Haddad, Laurence Lacroix, Philippe Schrurs, Ayhan Sahin, Christian Lovis, Sergio Manzano. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 29.05.2017.
High-Precision Timing of Several Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Ferdman, R. D.; Stairs, I. H.; Backer, D. C.; Ramachandran, R.; Demorest, P.; Nice, D. J.; Lyne, A. G.; Kramer, M.; Lorimer, D.; McLaughlin, M.; Manchester, D.; Camilo, F.; D'Amico, N.; Possenti, A.; Burgay, M.; Joshi, B. C.; Freire, P. C.
2004-12-01
The highest precision pulsar timing is achieved by reproducing as accurately as possible the pulse profile as emitted by the pulsar, in high signal-to-noise observations. The best profile reconstruction can be accomplished with several-bit voltage sampling and coherent removal of the dispersion suffered by pulsar signals as they traverse the interstellar medium. The Arecibo Signal Processor (ASP) and its counterpart the Green Bank Astronomical Signal Processor (GASP) are flexible, state-of-the-art wide-bandwidth observing systems, built primarily for high-precision long-term timing of millisecond and binary pulsars. ASP and GASP are in use at the 300-m Arecibo telescope in Puerto Rico and the 100-m Green Bank Telescope in Green Bank, West Virginia, respectively, taking advantage of the enormous sensitivities of these telescopes. These instruments result in high-precision science through 4 and 8-bit sampling and perform coherent dedispersion on the incoming data stream in real or near-real time. This is done using a network of personal computers, over an observing bandwidth of 64 to 128 MHz, in each of two polarizations. We present preliminary results of timing and polarimetric observations with ASP/GASP for several pulsars, including the recently-discovered relativistic double-pulsar binary J0737-3039. These data are compared to simultaneous observations with other pulsar instruments, such as the new "spigot card" spectrometer on the GBT and the Princeton Mark IV instrument at Arecibo, the precursor timing system to ASP. We also briefly discuss several upcoming observations with ASP/GASP.
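The real-time coherent dedispersion that ASP and GASP perform can be pictured with the frequency-domain sketch below: the Fourier transform of the complex baseband data is multiplied by the inverse of the interstellar dispersion chirp and transformed back. The dispersion constant, sign convention, and all numerical values here are illustrative and should be checked against a standard reference (e.g. the pulsar handbooks) before any real use.

```python
# Illustrative sketch of coherent dedispersion: multiply the Fourier transform
# of complex baseband data by the inverse of the interstellar dispersion
# "chirp". Constants and signs follow the usual convention and should be
# verified against a reference before real use.
import numpy as np

KDM = 4.148808e15           # approx. dispersion constant in Hz^2 s / (pc cm^-3)

def coherently_dedisperse(baseband, fc_hz, bw_hz, dm):
    """baseband: complex samples of one polarization channel,
    fc_hz: band centre frequency, bw_hz: sampled bandwidth, dm: pc cm^-3."""
    n = baseband.size
    f = np.fft.fftfreq(n, d=1.0 / bw_hz)          # offsets from band centre
    phase = 2.0 * np.pi * KDM * dm * f**2 / (fc_hz**2 * (fc_hz + f))
    chirp = np.exp(1j * phase)                    # inverse of the ISM transfer function
    return np.fft.ifft(np.fft.fft(baseband) * chirp)

# Example: 2^20 samples of a 64 MHz band centred at 1410 MHz, DM = 48.9 (illustrative).
x = np.random.randn(1 << 20) + 1j * np.random.randn(1 << 20)
y = coherently_dedisperse(x, fc_hz=1410e6, bw_hz=64e6, dm=48.9)
```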
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
NASA Astrophysics Data System (ADS)
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun
2015-09-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, considerable effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.
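The portability argument behind goMC rests on OpenCL's ability to target NVIDIA GPUs, AMD GPUs, and CPUs with the same kernel source. The minimal vector-operation sketch below (written against the third-party pyopencl bindings, and not taken from goMC itself) shows that pattern: the kernel text is compiled at run time for whatever device the selected platform exposes.

```python
# Minimal OpenCL portability demo (not goMC code): the same kernel source runs
# on an NVIDIA GPU, an AMD GPU, or a CPU device, depending on the platform chosen.
import numpy as np
import pyopencl as cl

src = """
__kernel void scale_add(__global const float *a, __global const float *b,
                        __global float *out, const float w) {
    int i = get_global_id(0);
    out[i] = w * a[i] + b[i];
}
"""

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)
out = np.empty_like(a)

ctx = cl.create_some_context()            # picks a GPU or CPU device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
o_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prg = cl.Program(ctx, src).build()        # compiled for the chosen device
prg.scale_add(queue, a.shape, None, a_buf, b_buf, o_buf, np.float32(2.0))
cl.enqueue_copy(queue, out, o_buf)
assert np.allclose(out, 2.0 * a + b, atol=1e-5)
```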
Efficient algorithms and implementations of entropy-based moment closures for rarefied gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel
We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by mapping its inherent fine-grained parallelism onto the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
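The dual formulation mentioned above can be illustrated with a deliberately small 1-D toy problem: the Lagrange multipliers of a maximum-entropy ansatz are recovered by Newton iteration on the dual objective, with the moments evaluated by quadrature, which is the step the paper accelerates on GPUs. The basis order, target moments, and safeguards below are illustrative assumptions and not the 35-moment implementation of the paper.

```python
# Toy 1-D sketch of the dual maximum-entropy closure idea (not the paper's
# 35-moment implementation): find multipliers alpha so that
# f(v) = exp(sum_k alpha_k v^k) reproduces prescribed moments, using Newton
# iteration on the dual objective. Moment integrals are evaluated by
# quadrature on a truncated velocity grid -- the step accelerated on GPUs.
import numpy as np

v = np.linspace(-10.0, 10.0, 4001)          # truncated velocity grid
w = np.gradient(v)                          # simple quadrature weights
P = np.vander(v, N=5, increasing=True)      # basis 1, v, v^2, v^3, v^4
target = np.array([1.0, 0.0, 1.0, 0.1, 3.3])  # illustrative, near-Maxwellian moments

def density(alpha):
    return np.exp(np.clip(P @ alpha, -700.0, 700.0))   # clip avoids overflow

def dual(alpha):
    return np.sum(w * density(alpha)) - alpha @ target

alpha = np.array([-0.5 * np.log(2 * np.pi), 0.0, -0.5, 0.0, 0.0])  # Maxwellian start
for it in range(100):
    f = density(alpha)
    g = P.T @ (w * f) - target              # dual gradient = moment mismatch
    if np.linalg.norm(g) < 1e-10:
        break
    H = P.T @ (w[:, None] * f[:, None] * P) # Hessian = matrix of higher moments
    d = -np.linalg.solve(H, g)
    t, psi0 = 1.0, dual(alpha)
    while dual(alpha + t * d) > psi0 and t > 1e-8:      # crude damping
        t *= 0.5
    alpha = alpha + t * d

print("multipliers:", alpha, " moment residual:", np.linalg.norm(g))
```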
Adapting the SpaceCube v2.0 Data Processing System for Mission-Unique Application Requirements
NASA Technical Reports Server (NTRS)
Petrick, David; Gill, Nat; Hasouneh, Munther; Stone, Robert; Winternitz, Luke; Thomas, Luke; Davis, Milton; Sparacino, Pietro; Flatley, Thomas
2015-01-01
The SpaceCube (TM) v2.0 system is a superior high performance, reconfigurable, hybrid data processing system that can be used in a multitude of applications including those that require a radiation hardened and reliable solution. This paper provides an overview of the design architecture, flexibility, and the advantages of the modular SpaceCube v2.0 high performance data processing system for space applications. The current state of the proven SpaceCube technology is based on nine years of engineering and operations. Five systems have been successfully operated in space starting in 2008 with four more to be delivered for launch vehicle integration in 2015. The SpaceCube v2.0 system is also baselined as the avionics solution for five additional flight projects and is always a top consideration as the core avionics for new instruments or spacecraft control. This paper will highlight how this multipurpose system is currently being used to solve design challenges of three independent applications. The SpaceCube hardware adapts to new system requirements by allowing for application-unique interface cards that are utilized by reconfiguring the underlying programmable elements on the core processor card. We will show how this system is being used to improve on a heritage NASA GPS technology, enable a cutting-edge LiDAR instrument, and serve as a typical command and data handling (C&DH) computer for a space robotics technology demonstration.
Adapting the SpaceCube v2.0 Data Processing System for Mission-Unique Application Requirements
NASA Technical Reports Server (NTRS)
Petrick, David
2015-01-01
The SpaceCube (TM) v2.0 system is a superior high performance, reconfigurable, hybrid data processing system that can be used in a multitude of applications including those that require a radiation hardened and reliable solution. This paper provides an overview of the design architecture, flexibility, and the advantages of the modular SpaceCube v2.0 high performance data processing system for space applications. The current state of the proven SpaceCube technology is based on nine years of engineering and operations. Five systems have been successfully operated in space starting in 2008 with four more to be delivered for launch vehicle integration in 2015. The SpaceCube v2.0 system is also baselined as the avionics solution for five additional flight projects and is always a top consideration as the core avionics for new instruments or spacecraft control. This paper will highlight how this multipurpose system is currently being used to solve design challenges of three independent applications. The SpaceCube hardware adapts to new system requirements by allowing for application-unique interface cards that are utilized by reconfiguring the underlying programmable elements on the core processor card. We will show how this system is being used to improve on a heritage NASA GPS technology, enable a cutting-edge LiDAR instrument, and serve as a typical command and data handling (CDH) computer for a space robotics technology demonstration.
Ng, C M
2013-10-01
The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. A graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and identical algorithm that is designed for the single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPU and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. Speedup factor was used to assess the relative benefit of parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of parallelized MCPEM algorithm developed in this study holds a great promise in serving as the core for the next-generation of modeling software for population PK/PD analysis.
Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.
Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji
2015-12-01
A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of human body when exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm can reduce the solving time by 3.5 times and reduce the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using asynchronous concurrent execution design. The communication overhead is well hidden so that the acceleration is nearly linear with the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards can reach 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large dimensional problems efficiently. A whole adult body discretized in 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a high resolution that meets the requirements of international dosimetry guidelines.
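A compact sketch of the COCG iteration used by the solver is given below; it differs from ordinary conjugate gradients only in replacing the Hermitian inner product with the unconjugated bilinear form r^T r, which is what exploits the complex-symmetric (rather than Hermitian) structure of the admittance matrix. The small dense test matrix is a stand-in; the actual solver operates on the sparse admittance network.

```python
# Sketch of the conjugate orthogonal conjugate gradient (COCG) iteration for a
# complex *symmetric* (A == A.T, not Hermitian) system. Ordinary CG's Hermitian
# inner products are replaced by the unconjugated bilinear form r.T @ r.
# The matrix below is a small dense stand-in for the admittance network.
import numpy as np

def cocg(A, b, tol=1e-10, maxit=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                          # unconjugated: sum(r_i^2), not |r|^2
    for k in range(maxit):
        q = A @ p
        alpha = rho / (p @ q)
        x += alpha * p
        r -= alpha * q
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x, maxit

# Random complex symmetric test system.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200)) + 1j * rng.standard_normal((200, 200))
A = M + M.T + 20 * np.eye(200)           # symmetric, diagonally strengthened
b = rng.standard_normal(200) + 0j
x, iters = cocg(A, b)
print("iterations:", iters, "residual:", np.linalg.norm(b - A @ x))
```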
The growth of the UniTree mass storage system at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Tarshish, Adina; Salmon, Ellen
1993-01-01
In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPU's on the facility's Cray YMP (the primary data source for UniTree), and the necessity for an aggressive regimen for repacking sparse tapes and hierarchical 'vaulting' of old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.
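As a rough illustration of the per-word check that such codes perform, the sketch below computes the syndrome of a received word by polynomial division over GF(2); a zero remainder indicates no detected burst. The generator polynomial and word lengths are arbitrary illustrations, not the (80,64) shortened Fire code or the proposed outer code.

```python
# Not the UNICON hardware codes: a minimal GF(2) illustration of the syndrome
# computation underlying cyclic burst-error codes such as Fire codes. Dividing
# the received word (as a polynomial) by the generator g(x) gives the syndrome;
# a zero remainder means no burst was detected. g(x) here is illustrative only.
def gf2_remainder(bits, gen):
    """Remainder of bits (MSB first, list of 0/1) modulo the generator polynomial."""
    r = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if r[i]:
            for j, gbit in enumerate(gen):
                r[i + j] ^= gbit
    return r[-(len(gen) - 1):]            # remainder has degree < deg(g)

def encode(data, gen):
    """Systematic encoding: append the parity remainder to the data bits."""
    padded = list(data) + [0] * (len(gen) - 1)
    return list(data) + gf2_remainder(padded, gen)

gen = [1, 0, 1, 1, 0, 1, 1, 1, 1]         # an arbitrary degree-8 generator
word = encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1], gen)
assert not any(gf2_remainder(word, gen))  # clean word -> zero syndrome

corrupted = word.copy()
for k in (5, 6, 7):                       # a 3-bit burst
    corrupted[k] ^= 1
print("syndrome of burst-corrupted word:", gf2_remainder(corrupted, gen))
```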
NASA Technical Reports Server (NTRS)
Mitchell, Julie L.; Broyan, James L.; Pickering, Karen D.; Adam, Niklas; Casteel, Michael; Callahan, Michael; Carrier, Chris
2012-01-01
In support of the Urine Processor Assembly Precipitation Prevention Project (UPA PPP), multiple technologies were explored to prevent CaSO4·2H2O (gypsum) precipitation during the on-orbit distillation process. Gypsum precipitation currently limits the water recovery rate onboard the International Space Station (ISS) to 70% versus the planned 85% target water recovery rate. Due to its ability to remove calcium cations in pretreated augmented urine (PTAU), ion exchange was selected as one of the technologies for further development by the PPP team. A total of 13 ion exchange resins were evaluated in various equilibrium and dynamic column tests with solutions of dissolved gypsum, urine ersatz, PTAU, and PTAU brine at 85% water recovery. While initial evaluations indicated that the Purolite SST60 resin had the highest calcium capacity in PTAU (0.30 meq/mL average), later tests showed that the Dowex G26 and Amberlite FPC12H resins had the highest capacity (0.5 meq/mL average). Testing at the Marshall Spaceflight Center (MSFC) integrates the ion exchange technology with a UPA ground article under flight-like pulsed flow conditions with PTAU. To date, no gypsum precipitation has taken place in any of the initial evaluations.
Leang, Sarom S; Rendell, Alistair P; Gordon, Mark S
2014-03-11
Increasingly, modern computer systems comprise a multicore general-purpose processor augmented with a number of special purpose devices or accelerators connected via an external interface such as a PCI bus. The NVIDIA Kepler Graphical Processing Unit (GPU) and the Intel Phi are two examples of such accelerators. Accelerators offer peak performances that can be well above those of the host processor. How to exploit this heterogeneous environment for legacy application codes is not, however, straightforward. This paper considers how matrix operations in typical quantum chemical calculations can be migrated to the GPU and Phi systems. Double precision general matrix multiply operations are endemic in electronic structure calculations, especially methods that include electron correlation, such as density functional theory, second order perturbation theory, and coupled cluster theory. The use of approaches that automatically determine whether to use the host or an accelerator, based on problem size, is explored, with computations that are occurring on the accelerator and/or the host. For data-transfers over PCI-e, the GPU provides the best overall performance for data sizes up to 4096 MB with consistent upload and download rates between 5-5.6 GB/s and 5.4-6.3 GB/s, respectively. The GPU outperforms the Phi for both square and nonsquare matrix multiplications.
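The size-based host/accelerator dispatch described above can be pictured with the small wrapper below. It is an illustrative sketch, not the paper's electronic-structure code: CuPy stands in for the accelerator back end, and the crossover threshold is an arbitrary placeholder that would in practice be tuned from measured transfer and compute rates such as those reported in the abstract.

```python
# Illustrative sketch (not the paper's code): dispatch a double-precision
# matrix multiply to the host BLAS or to an accelerator depending on problem
# size, mirroring the automatic host/accelerator selection discussed above.
# CuPy stands in for the accelerator backend; the threshold is arbitrary.
import numpy as np

try:
    import cupy as cp
    HAVE_GPU = True
except ImportError:
    HAVE_GPU = False

DGEMM_GPU_THRESHOLD = 1024        # dimension above which the accelerator pays off

def dgemm(a, b):
    n = max(a.shape[0], a.shape[1], b.shape[1])
    if HAVE_GPU and n >= DGEMM_GPU_THRESHOLD:
        # Pay the PCIe transfer cost only when the FLOP count justifies it.
        c = cp.asarray(a) @ cp.asarray(b)
        return cp.asnumpy(c)
    return a @ b                  # host BLAS path for small matrices

a = np.random.rand(2048, 2048)
b = np.random.rand(2048, 2048)
c = dgemm(a, b)
```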
Digital Beamforming Scatterometer
NASA Technical Reports Server (NTRS)
Rincon, Rafael F.; Vega, Manuel; Kman, Luko; Buenfil, Manuel; Geist, Alessandro; Hillard, Larry; Racette, Paul
2009-01-01
This paper discusses scatterometer measurements collected with the multi-mode Digital Beamforming Synthetic Aperture Radar (DBSAR) during the SMAP-VEX 2008 campaign. The 2008 SMAP Validation Experiment was conducted to address a number of specific questions related to the soil moisture retrieval algorithms. SMAP-VEX 2008 consisted of a series of aircraft-based flights conducted on the Eastern Shore of Maryland and Delaware in the fall of 2008. Several other instruments participated in the campaign including the Passive Active L-Band System (PALS), the Marshall Airborne Polarimetric Imaging Radiometer (MAPIR), and the Global Positioning System Reflectometer (GPSR). This campaign was the first SMAP Validation Experiment. DBSAR is a multimode radar system developed at NASA/Goddard Space Flight Center that combines state-of-the-art radar technologies, on-board processing, and advances in signal processing techniques in order to enable new remote sensing capabilities applicable to Earth science and planetary applications [1]. The instrument can be configured to operate in scatterometer, Synthetic Aperture Radar (SAR), or altimeter mode. The system builds upon the L-band Imaging Scatterometer (LIS) developed as part of the RadSTAR program. The radar is a phased array system designed to fly on the NASA P3 aircraft. The instrument consists of a programmable waveform generator, eight transmit/receive (T/R) channels, a microstrip antenna, and a reconfigurable data acquisition and processor system. Each transmit channel incorporates a digital attenuator and a digital phase shifter that enable amplitude and phase modulation on transmit. The attenuators, phase shifters, and calibration switches are digitally controlled by the radar control card (RCC) on a pulse-by-pulse basis. The antenna is a corporate-fed microstrip patch array centered at 1.26 GHz with a 20 MHz bandwidth. Although only one feed is used with the present configuration, a provision was made for separate corporate feeds for vertical and horizontal polarization. System upgrades to dual polarization are currently under way. The DBSAR processor is a reconfigurable data acquisition and processor system capable of real-time, high-speed data processing. DBSAR uses an FPGA-based architecture to implement digital down-conversion, in-phase and quadrature (I/Q) demodulation, and subsequent radar-specific algorithms. The core of the processor board consists of an analog-to-digital (A/D) section, three Altera Stratix field programmable gate arrays (FPGAs), an ARM microcontroller, several memory devices, and an Ethernet interface. The processor also interfaces with a navigation board consisting of a GPS and a MEMS gyro. The processor has been configured to operate in scatterometer, Synthetic Aperture Radar (SAR), and altimeter modes. All the modes are based on digital beamforming, a digital process that generates the far-field beam patterns at various scan angles from voltages sampled in the antenna array. This technique allows steering the received beam and controlling its beamwidth and side-lobe level. Several beamforming techniques can be implemented, each characterized by unique strengths and weaknesses and each applicable to different measurement scenarios. In scatterometer mode, the radar is capable of generating a wide beam or scanning a narrow beam on transmit, and of steering the received beam in processing while controlling its beamwidth and side-lobe level. Table I lists some important radar characteristics.
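The digital beamforming step described above, generating far-field beams at chosen scan angles from the sampled element voltages, reduces in the narrowband case to applying a steering phase ramp and summing across the array. The sketch below is a textbook delay-and-sum (phase-shift) beamformer for a uniform linear array; the element count, spacing, and signal model are illustrative and are not taken from the DBSAR design.

```python
# Minimal narrowband phase-shift (delay-and-sum) beamformer for a uniform
# linear array: the core digital-beamforming step of steering the received
# beam in post-processing. Array geometry and numbers are illustrative only.
import numpy as np

c = 3e8
fc = 1.26e9                          # L-band carrier, as in the text
lam = c / fc
n_elem = 8                           # e.g. one channel per T/R module
d = lam / 2.0                        # half-wavelength spacing
elem = np.arange(n_elem)

def steering_vector(theta_deg):
    """Phase ramp across the array for a plane wave arriving from angle theta."""
    return np.exp(-2j * np.pi * d * elem * np.sin(np.radians(theta_deg)) / lam)

def beamform(snapshots, theta_deg):
    """snapshots: (n_elem, n_samples) complex baseband samples."""
    w = steering_vector(theta_deg) / n_elem
    return np.conj(w) @ snapshots    # coherent sum toward theta

# Simulate a point target at 20 degrees and scan the received beam.
theta_true = 20.0
n_samp = 1000
signal = np.exp(2j * np.pi * 0.01 * np.arange(n_samp))
x = np.outer(steering_vector(theta_true), signal)
x += 0.1 * (np.random.randn(n_elem, n_samp) + 1j * np.random.randn(n_elem, n_samp))

scan = np.arange(-60, 61, 1)
power = [np.mean(np.abs(beamform(x, th)) ** 2) for th in scan]
print("peak response near", scan[int(np.argmax(power))], "degrees")
```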
NASA Astrophysics Data System (ADS)
Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih
2017-05-01
Video recording is an essential feature of new-generation military imaging systems. Playback of the stored video on the same device is also desirable as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are also used during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done using the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480 25 fps thermal camera on a Cyclone V FPGA, Altera's lowest-power FPGA family, and consumes less than 40% of the Cyclone V 5CEFA7 FPGA resources on average.
Mission Use of the SpaceCube Hybrid Data Processing System
NASA Technical Reports Server (NTRS)
Petrick, Dave
2017-01-01
The award-winning SpaceCube v2.0 system is a high performance, reconfigurable, hybrid data processing system that can be used in a multitude of applications including those that require a radiation hardened and reliable solution. This presentation provides an overview of the design architecture, flexibility, and the advantages of the modular SpaceCube v2.0 high performance data processing system for space applications. The current state of the proven SpaceCube technology is based on 11 years of engineering and operations. Eight systems have been successfully operated in space starting in 2008 with eight more to be delivered for payload integration in 2018 in support of various missions. This presentation will highlight how this multipurpose system is currently being used to solve design challenges of a variety of independent applications. The SpaceCube hardware adapts to new system requirements by allowing for application-unique interface cards that are utilized by reconfiguring the underlying programmable elements on the core processor card. We will show how this system is being used to improve on a heritage NASA GPS technology, enable a cutting-edge LiDAR instrument, and serve as a typical command and data handling (CDH) computer for a space robotics technology demonstration. Finally, this presentation will highlight the use of the SpaceCube v2.0 system on the Restore-L robotic satellite servicing mission. SpaceCube v2.0 is the central avionics responsible for the real-time vision system and autonomous robotic control necessary to find, capture, and service a national asset weather satellite.
Design of efficient and simple interface testing equipment for opto-electric tracking system
NASA Astrophysics Data System (ADS)
Liu, Qiong; Deng, Chao; Tian, Jing; Mao, Yao
2016-10-01
Interface testing for an opto-electric tracking system is important work to assure running performance; it verifies, at different levels, whether the design of each electronic interface matches its communication protocol. Opto-electric tracking systems today are complex, composed of many functional units. Usually, interface testing is executed between fully manufactured units, so it depends heavily on unit design and manufacturing progress as well as on the people involved; as a result it often takes days or weeks. To solve this problem, this paper proposes efficient and simple interface testing equipment for opto-electric tracking systems, consisting of optional interface circuit cards, a processor, and a test program. The hardware cards provide matched hardware interfaces that are easily supplied by the hardware engineer. An automatic code generation technique provides adaptation to new communication protocols: automatic acquisition of protocol items, automatic construction of the code architecture, and automatic encoding are used to quickly form a new, adapted test program. After a few simple steps, customized interface testing equipment with a matching test program and interfaces is ready for the system under test in minutes. The equipment has been used on many opto-electric tracking systems to test all or part of their interfaces, reducing test time from days to hours and greatly improving test efficiency, with high software quality and stability and without manual coding. Used as a common tool, the equipment promoted by this paper has changed the traditional interface testing method and delivers much higher efficiency.
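One way to picture the automatic code generation step is that a new protocol is described declaratively and the packer/unpacker for its messages is produced from that description rather than hand-written. The sketch below is an invented illustration of that idea, not the paper's tool; the message layout and field names are hypothetical.

```python
# Invented illustration (not the paper's tool) of the automatic-code-generation
# idea: build a packet packer/unpacker directly from a declarative description
# of an electronic interface's message layout, so a new protocol needs only a
# new table rather than hand-written test code.
import struct

# Hypothetical message layout for one interface: (field name, struct format).
GIMBAL_STATUS = [
    ("header",    "B"),   # fixed 0xA5
    ("azimuth",   "f"),   # degrees
    ("elevation", "f"),   # degrees
    ("status",    "H"),   # bit flags
]

def make_codec(layout, byte_order="<"):
    fmt = byte_order + "".join(f for _, f in layout)
    names = [n for n, _ in layout]
    pack = lambda **fields: struct.pack(fmt, *(fields[n] for n in names))
    unpack = lambda raw: dict(zip(names, struct.unpack(fmt, raw)))
    return pack, unpack

pack, unpack = make_codec(GIMBAL_STATUS)
raw = pack(header=0xA5, azimuth=12.5, elevation=-3.0, status=0b0000_0011)
print(unpack(raw))   # round-trips a test message for the interface under test
```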
Plant, Richard R; Turner, Garry
2009-08-01
Since the publication of Plant, Hammond, and Turner (2004), which highlighted a pressing need for researchers to pay more attention to sources of error in computer-based experiments, the landscape has undoubtedly changed, but not necessarily for the better. Readily available hardware has improved in terms of raw speed; multi core processors abound; graphics cards now have hundreds of megabytes of RAM; main memory is measured in gigabytes; drive space is measured in terabytes; ever larger thin film transistor displays capable of single-digit response times, together with newer Digital Light Processing multimedia projectors, enable much greater graphic complexity; and new 64-bit operating systems, such as Microsoft Vista, are now commonplace. However, have millisecond-accurate presentation and response timing improved, and will they ever be available in commodity computers and peripherals? In the present article, we used a Black Box ToolKit to measure the variability in timing characteristics of hardware used commonly in psychological research.
Graphical processors for HEP trigger systems
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-02-01
General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We will discuss the use of online parallel computing on GPUs for synchronous low level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. Latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup.
Non-radiation hardened microprocessors in space-based remote sensing systems
NASA Astrophysics Data System (ADS)
DeCoursey, R.; Melton, Ryan; Estes, Robert R., Jr.
2006-09-01
The CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) mission is a comprehensive suite of active and passive sensors including a 20 Hz 230 mJ Nd:YAG lidar, a visible wavelength Earth-looking camera and an imaging infrared radiometer. CALIPSO flies in formation with the Earth Observing System Post-Meridian (EOS PM) train, provides continuous, near-simultaneous measurements and is a planned 3-year mission. CALIPSO was launched into a 98 degree sun synchronous Earth orbit in April of 2006 to study clouds and aerosols and acquires over 5 gigabytes of data every 24 hours. Figure 1 shows the ground track of one CALIPSO orbit as well as high and low intensity South Atlantic Anomaly outlines. CALIPSO passes through the SAA several times each day. Space-based remote sensing systems that include multiple instruments and/or instruments such as lidar generate large volumes of data and require robust real-time hardware and software mechanisms and high throughput processors. Due to onboard storage restrictions and telemetry downlink limitations these systems must pre-process and reduce the data before sending it to the ground. This onboard processing and real-time requirement load may mean that newer more powerful processors are needed even though acceptable radiation-hardened versions have not yet been released. CALIPSO's single board computer payload controller processor is actually a set of four (4) voting non-radiation hardened COTS Power PC 603r's built on a single width VME card by General Dynamics Advanced Information Systems (GDAIS). Significant radiation concerns for CALIPSO and other Low Earth Orbit (LEO) satellites include the South Atlantic Anomaly (SAA), the north and south poles and strong solar events. Over much of South America and extending into the South Atlantic Ocean (see figure 1) the Van Allen radiation belts dip to just 200-800km and spacecraft entering this area are subjected to high energy protons and experience higher than normal Single Event Upset (SEU) and Single Event Latch-up (SEL) rates. Although less significant, spacecraft flying in the area around the poles experience similar upsets. Finally, powerful solar proton events in the range of 10MeV/10pfu to 100MeV/1pfu as are forecasted and tracked by NOAA's Space Environment Center in Colorado can result in Single Event Upset (SEU), Single Event Latch-up (SEL) and permanent failures such as Single Event Gate Rupture (SEGR) in some technologies. (Galactic Cosmic Rays (GCRs) are another source, especially for gate rupture.) CALIPSO mitigates common radiation concerns in its data handling through the use of redundant processors, radiation-hardened Application Specific Integrated Circuits (ASIC), hardware-based Error Detection and Correction (EDAC), processor and memory scrubbing, redundant boot code and mirrored files. After presenting a system overview this paper will expand on each of these strategies. Where applicable, related on-orbit data collected since the CALIPSO initial boot on May 4, 2006 will be noted.
Non Radiation Hardened Microprocessors in Spaced Based Remote Sensing Systems
NASA Technical Reports Server (NTRS)
Decoursey, Robert J.; Estes, Robert F.; Melton, Ryan
2006-01-01
The CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) mission is a comprehensive suite of active and passive sensors including a 20 Hz 230 mJ Nd:YAG lidar, a visible wavelength Earth-looking camera and an imaging infrared radiometer. CALIPSO flies in formation with the Earth Observing System Post-Meridian (EOS PM) train, provides continuous, near-simultaneous measurements and is a planned 3-year mission. CALIPSO was launched into a 98 degree sun synchronous Earth orbit in April of 2006 to study clouds and aerosols and acquires over 5 gigabytes of data every 24 hours. The ground track of one CALIPSO orbit as well as high and low intensity South Atlantic Anomaly outlines is shown. CALIPSO passes through the SAA several times each day. Space-based remote sensing systems that include multiple instruments and/or instruments such as lidar generate large volumes of data and require robust real-time hardware and software mechanisms and high throughput processors. Due to onboard storage restrictions and telemetry downlink limitations these systems must pre-process and reduce the data before sending it to the ground. This onboard processing and real-time requirement load may mean that newer more powerful processors are needed even though acceptable radiation-hardened versions have not yet been released. CALIPSO's single board computer payload controller processor is actually a set of four (4) voting non-radiation hardened COTS Power PC 603r's built on a single width VME card by General Dynamics Advanced Information Systems (GDAIS). Significant radiation concerns for CALIPSO and other Low Earth Orbit (LEO) satellites include the South Atlantic Anomaly (SAA), the north and south poles and strong solar events. Over much of South America and extending into the South Atlantic Ocean the Van Allen radiation belts dip to just 200-800km and spacecraft entering this area are subjected to high energy protons and experience higher than normal Single Event Upset (SEU) and Single Event Latch-up (SEL) rates. Although less significant, spacecraft flying in the area around the poles experience similar upsets. Finally, powerful solar proton events in the range of 10MeV/10pfu to 100MeV/1pfu as are forecasted and tracked by NOAA's Space Environment Center in Colorado can result in Single Event Upset (SEU), Single Event Latch-up (SEL) and permanent failures such as Single Event Gate Rupture (SEGR) in some technologies. (Galactic Cosmic Rays (GCRs) are another source, especially for gate rupture.) CALIPSO mitigates common radiation concerns in its data handling through the use of redundant processors, radiation-hardened Application Specific Integrated Circuits (ASIC), hardware-based Error Detection and Correction (EDAC), processor and memory scrubbing, redundant boot code and mirrored files. After presenting a system overview this paper will expand on each of these strategies. Where applicable, related on-orbit data collected since the CALIPSO initial boot on May 4, 2006 will be noted.
Bio-inspired optical rotation sensor
NASA Astrophysics Data System (ADS)
O'Carroll, David C.; Shoemaker, Patrick A.; Brinkworth, Russell S. A.
2007-01-01
Traditional approaches to calculating self-motion from visual information in artificial devices have generally relied on object identification and/or correlation of image sections between successive frames. Such calculations are computationally expensive and real-time digital implementation requires powerful processors. In contrast flies arrive at essentially the same outcome, the estimation of self-motion, in a much smaller package using vastly less power. Despite the potential advantages and a few notable successes, few neuromorphic analog VLSI devices based on biological vision have been employed in practical applications to date. This paper describes a hardware implementation in aVLSI of our recently developed adaptive model for motion detection. The chip integrates motion over a linear array of local motion processors to give a single voltage output. Although the device lacks on-chip photodetectors, it includes bias circuits to use currents from external photodiodes, and we have integrated it with a ring-array of 40 photodiodes to form a visual rotation sensor. The ring configuration reduces pattern noise and combined with the pixel-wise adaptive characteristic of the underlying circuitry, permits a robust output that is proportional to image rotational velocity over a large range of speeds, and is largely independent of either mean luminance or the spatial structure of the image viewed. In principle, such devices could be used as an element of a velocity-based servo to replace or augment inertial guidance systems in applications such as mUAVs.
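The correlation-type motion detection underlying such chips can be sketched with the textbook Hassenstein-Reichardt elementary motion detector below: two neighbouring photodetector signals are each delayed by a low-pass filter and cross-multiplied, and the difference of the two mirror-symmetric products is direction selective. This is the generic model only; the paper's adaptive aVLSI circuitry (notably its photoreceptor adaptation) is not represented.

```python
# Textbook Hassenstein-Reichardt elementary motion detector (EMD), the class of
# correlation-based model underlying the chip (the adaptive aVLSI model of the
# paper adds photoreceptor adaptation not shown here). Two neighbouring
# photodetector signals are each low-pass delayed and cross-multiplied; the
# difference of the two mirror-symmetric products is direction selective.
import numpy as np

def lowpass(x, tau, dt):
    """First-order low-pass filter used as the EMD delay element."""
    a = dt / (tau + dt)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i] - y[i - 1])
    return y

def emd(left, right, tau=0.035, dt=0.001):
    return lowpass(left, tau, dt) * right - lowpass(right, tau, dt) * left

# Moving sine grating sampled by two detectors separated by a small phase offset.
t = np.arange(0, 2.0, 0.001)
phase_offset = 0.6                      # spatial separation in radians
for direction in (+1, -1):
    left = np.sin(2 * np.pi * 4 * t * direction)
    right = np.sin(2 * np.pi * 4 * t * direction - phase_offset)
    print("direction", direction, "mean EMD output:", emd(left, right).mean())
```

The mean output changes sign when the grating direction reverses, which is the property a ring of such detectors exploits to report rotational velocity.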
CUDA compatible GPU cards as efficient hardware accelerators for Smith-Waterman sequence alignment
Manavski, Svetlin A; Valle, Giorgio
2008-01-01
Background: Searching for similarities in protein and DNA databases has become a routine procedure in Molecular Biology. The Smith-Waterman algorithm has been available for more than 25 years. It is based on a dynamic programming approach that explores all the possible alignments between two sequences; as a result it returns the optimal local alignment. Unfortunately, the computational cost is very high, requiring a number of operations proportional to the product of the lengths of the two sequences. Furthermore, the exponential growth of protein and DNA databases makes the Smith-Waterman algorithm unrealistic for searching similarities in large sets of sequences. For these reasons heuristic approaches such as those implemented in FASTA and BLAST tend to be preferred, allowing faster execution times at the cost of reduced sensitivity. The main motivation of our work is to exploit the huge computational power of commonly available graphics cards, to develop high performance solutions for sequence alignment. Results: In this paper we present what we believe is the fastest solution of the exact Smith-Waterman algorithm running on commodity hardware. It is implemented in the recently released CUDA programming environment by NVidia. CUDA allows direct access to the hardware primitives of the last-generation Graphics Processing Units (GPU) G80. Speeds of more than 3.5 GCUPS (Giga Cell Updates Per Second) are achieved on a workstation running two GeForce 8800 GTX cards. Exhaustive tests have been done to compare our implementation to SSEARCH and BLAST, running on a 3 GHz Intel Pentium IV processor. Our solution was also compared to a recently published GPU implementation and to a Single Instruction Multiple Data (SIMD) solution. These tests show that our implementation performs from 2 to 30 times faster than any other previous attempt available on commodity hardware. Conclusions: The results show that graphics cards are now sufficiently advanced to be used as efficient hardware accelerators for sequence alignment. Their performance is better than any alternative available on commodity hardware platforms. The solution presented in this paper allows large scale alignments to be performed at low cost, using the exact Smith-Waterman algorithm instead of the largely adopted heuristic approaches. PMID:18387198
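For reference, the recurrence that the CUDA kernels accelerate is reproduced below as a clarity-over-speed CPU implementation of the Smith-Waterman local alignment score with a linear gap penalty; the scoring parameters are common defaults and are not necessarily those used in the paper's benchmarks.

```python
# Reference (CPU, clarity-over-speed) Smith-Waterman local alignment score with
# a linear gap penalty: the dynamic-programming recurrence that GPU kernels
# parallelize. Match/mismatch/gap values are typical defaults.
import numpy as np

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=np.int64)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,                    # local alignment floor
                          H[i - 1, j - 1] + s,  # (mis)match
                          H[i - 1, j] + gap,    # gap in b
                          H[i, j - 1] + gap)    # gap in a
    return int(H.max())                         # best local alignment score

print(smith_waterman_score("ACACACTA", "AGCACACA"))   # classic example, scores 12
```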
Pedretti, Kevin
2008-11-18
A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
Scalable and portable visualization of large atomistic datasets
NASA Astrophysics Data System (ADS)
Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2004-10-01
A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms, which have a high probability of being visible. Finally a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms.
Program summary
Title of program: Atomsviewer
Catalogue identifier: ADUM
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUM
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card
Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5
Programming languages used: C++, C and OpenGL
Memory required to execute with typical data: 1 gigabyte of RAM
High speed storage required: 60 gigabytes
No. of lines in the distributed program including test data, etc.: 550 241
No. of bytes in the distributed program including test data, etc.: 6 258 245
Number of bits in a word: Arbitrary
Number of processors used: 1
Has the code been vectorized or parallelized: No
Distribution format: tar gzip file
Nature of physical problem: Scientific visualization of atomic systems
Method of solution: Rendering of atoms using computer graphic techniques, culling algorithms for data minimization, and levels-of-detail for minimal rendering
Restrictions on the complexity of the problem: None
Typical running time: The program is interactive in its execution
Unusual features of the program: None
References: The conceptual foundation and subsequent implementation of the algorithms are found in [A. Sharma, A. Nakano, R.K. Kalia, P. Vashishta, S. Kodiyalam, P. Miller, W. Zhao, X.L. Liu, T.J. Campbell, A. Haas, Presence—Teleoperators and Virtual Environments 12 (1) (2003)].
Case Study of Using High Performance Commercial Processors in Space
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Olivas, Zulema
2009-01-01
The purpose of the Space Shuttle Cockpit Avionics Upgrade project (1999-2004) was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal to reduce crew workload and improve situational awareness resulted in the need for high performance Central Processing Units (CPUs). The choice of CPU selected was the PowerPC family, which is a reduced instruction set computer (RISC) known for its high performance. However, the requirement for radiation tolerance resulted in the re-evaluation of the selected family member of the PowerPC line. Radiation testing revealed that the original selected processor (PowerPC 7400) was too soft to meet mission objectives and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation tolerant version, but had some ability to detect failures. However, its cache tags did not provide parity and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.
Case Study of Using High Performance Commercial Processors in a Space Environment
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Olivas, Zulema
2009-01-01
The purpose of the Space Shuttle Cockpit Avionics Upgrade project was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal to reduce crew workload and improve situational awareness resulted in the need for high performance Central Processing Units (CPUs). The choice of CPU selected was the PowerPC family, which is a reduced instruction set computer (RISC) known for its high performance. However, the requirement for radiation tolerance resulted in the reevaluation of the selected family member of the PowerPC line. Radiation testing revealed that the original selected processor (PowerPC 7400) was too soft to meet mission objectives and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation tolerant version, but fared better than the 7400 in the ability to detect failures. However, its cache tags did not provide parity and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.
A plug-in to Eclipse for VHDL source codes: functionalities
NASA Astrophysics Data System (ADS)
Niton, B.; Poźniak, K. T.; Romaniuk, R. S.
The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation based on VEditor, a free-license program, is described; the work presented in this paper thus supplements and extends this free tool. The introduction briefly characterizes the tools available on the market for aiding the design of electronic systems in VHDL. Particular attention is paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the plug-in are presented, such as the programming extension concept and the results of the formatter, refactorer, code hider, and other new additions to the VEditor program.
NASA Technical Reports Server (NTRS)
Snow, Walter L.; Childers, Brooks A.; Jones, Stephen B.; Fremaux, Charles M.
1993-01-01
A model space positioning system (MSPS), a state-of-the-art, real-time tracking system to provide the test engineer with on-line model pitch and spin rate information, is described. It is noted that the six-degree-of-freedom post-processor program will require additional programming effort both in the automated tracking mode for high spin rates and in accuracy to meet the measurement objectives. An independent multicamera system intended to augment the MSPS is studied using laboratory calibration methods based on photogrammetry to characterize the losses in various recording options. Data acquired to Super VHS tape encoded with Vertical Interval Time Code and transcribed to video disk are considered to be a reasonably priced choice for post editing and processing video data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.
In this work, an investigation of efficiency enhancing methods and cross-section data in the BEAMnrc Monte Carlo (MC) code system is presented. Additionally, BEAMnrc was compared with VMC++, another special-purpose MC code system that has recently been enhanced for the simulation of the entire treatment head. BEAMnrc and VMC++ were used to simulate a 6 MV photon beam from a Siemens Primus linear accelerator (linac) and phase space (PHSP) files were generated at 100 cm source-to-surface distance for the 10x10 and 40x40 cm² field sizes. The BEAMnrc parameters/techniques under investigation were grouped by (i) photon and bremsstrahlung cross sections, (ii) approximate efficiency improving techniques (AEITs), (iii) variance reduction techniques (VRTs), and (iv) a VRT (bremsstrahlung photon splitting) in combination with an AEIT (charged particle range rejection). The BEAMnrc PHSP file obtained without the efficiency enhancing techniques under study or, when not possible, with their default values (e.g., EXACT algorithm for the boundary crossing algorithm) and with the default cross-section data (PEGS4 and Bethe-Heitler) was used as the "baseline" for accuracy verification of the PHSP files generated from the different groups described previously. Subsequently, a selection of the PHSP files was used as input for DOSXYZnrc-based water phantom dose calculations, which were verified against measurements. The performance of the different VRTs and AEITs available in BEAMnrc and of VMC++ was specified by the relative efficiency, i.e., by the efficiency of the MC simulation relative to that of the BEAMnrc baseline calculation. The highest relative efficiencies were ~935 (~111 min on a single 2.6 GHz processor) and ~200 (~45 min on a single processor) for the 10x10 field size with 50 million histories and 40x40 cm² field size with 100 million histories, respectively, using the VRT directional bremsstrahlung splitting (DBS) with no electron splitting. When DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were ~420 (~253 min on a single processor) and ~175 (~58 min on a single processor) for the 10x10 and 40x40 cm² field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of ~1400 (~6 min on a single processor) and ~60 (~4 min on a single processor) for the 10x10 and 40x40 cm² field sizes, respectively. BEAMnrc PHSP calculations with DBS alone or DBS in combination with charged particle range rejection were more efficient than the other efficiency enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences (±1%-3%) in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe-Heitler (BEAMnrc default bremsstrahlung cross section). However, MC calculated dose distributions in water phantoms (using combinations of VRTs/AEITs and cross-section data) agreed within 2% of measurements. Furthermore, MC calculated dose distributions in a simulated water/air/water phantom, using NIST cross sections, were within 2% agreement with the BEAMnrc Bethe-Heitler default case.
A bunch to bucket phase detector for the RHIC LLRF upgrade platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, K.S.; Harvey, M.; Hayes, T.
2011-03-28
As part of the overall development effort for the RHIC LLRF Upgrade Platform [1,2,3], a generic four channel 16 bit Analog-to-Digital Converter (ADC) daughter module was developed to provide high speed, wide dynamic range digitizing and processing of signals from DC to several hundred megahertz. The first operational use of this card was to implement the bunch to bucket phase detector for the RHIC LLRF beam control feedback loops. This paper will describe the design and performance features of this daughter module as a bunch to bucket phase detector, and also provide an overview of its place within the overall LLRF platform architecture as a high performance digitizer and signal processing module suitable to a variety of applications. In modern digital control and signal processing systems, ADCs provide the interface between the analog and digital signal domains. Once digitized, signals are then typically processed using algorithms implemented in field programmable gate array (FPGA) logic, general purpose processors (GPPs), digital signal processors (DSPs) or a combination of these. For the recently developed and commissioned RHIC LLRF Upgrade Platform, we've developed a four channel ADC daughter module based on the Linear Technology LTC2209 16 bit, 160 MSPS ADC and the Xilinx V5FX70T FPGA. The module is designed to be relatively generic in application, and with minimal analog filtering on board, is capable of processing signals from DC to 500 MHz or more. The module's first application was to implement the bunch to bucket phase detector (BTB-PD) for the RHIC LLRF system. The same module also provides DC digitizing of analog processed BPM signals used by the LLRF system for radial feedback.
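The signal-processing core of a digital bunch-to-bucket phase detector can be sketched as digital down-conversion of the digitized beam signal against the RF reference followed by an arctangent of the filtered I/Q pair. The sketch below illustrates only that idea; the sample rate, RF frequency, and filtering are placeholders, and this is not the firmware running on the LTC2209/V5FX70T daughter module.

```python
# Simplified numpy sketch of the digital phase-detection idea behind a
# bunch-to-bucket phase detector: digitally down-convert the sampled beam
# signal against the RF reference, low-pass the I/Q pair, and take atan2.
# This is an illustration, not the FPGA firmware on the daughter card.
import numpy as np

fs = 160e6                     # ADC sample rate, illustrative
frf = 28.0e6                   # RF/bunch frequency, illustrative
n = 4096
t = np.arange(n) / fs

true_phase = 0.4               # radians of bunch arrival offset to recover
beam = np.cos(2 * np.pi * frf * t + true_phase) + 0.05 * np.random.randn(n)

# Digital down-conversion: multiply by quadrature copies of the reference.
i_raw = beam * np.cos(2 * np.pi * frf * t)
q_raw = beam * -np.sin(2 * np.pi * frf * t)

# Crude low-pass (boxcar average) stands in for the decimation filters.
I, Q = i_raw.mean(), q_raw.mean()
print("measured phase: %.3f rad (true %.3f rad)" % (np.arctan2(Q, I), true_phase))
```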
Accelerated Monte Carlo Simulation on the Chemical Stage in Water Radiolysis using GPU
Tian, Zhen; Jiang, Steve B.; Jia, Xun
2018-01-01
The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2. PMID:28323637
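The reported speed-up range can be checked directly from the quoted wall-clock times; a trivial sketch of that arithmetic, using the values given in the abstract:

```python
# Reported GPU times (s) and single-core CPU times (h) for the two test cases
gpu_s = [599.2, 489.0]
cpu_h = [28.6, 26.8]
for g, c in zip(gpu_s, cpu_h):
    print(round(c * 3600 / g, 1))   # ~171.8 and ~197.3
```

The results agree with the quoted 171.1-197.2 range to within the rounding of the reported times.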
Malchow, Berend; Keller, Katriona; Hasan, Alkomiet; Dörfler, Sebastian; Schneider-Axmann, Thomas; Hillmer-Vogel, Ursula; Honer, William G.; Schulze, Thomas G.; Niklas, Andree; Wobrock, Thomas; Schmitt, Andrea; Falkai, Peter
2015-01-01
Aerobic exercise has been shown to improve symptoms in multiepisode schizophrenia, including cognitive impairments, but results are inconsistent. Therefore, we evaluated the effects of an enriched environment paradigm consisting of bicycle ergometer training and add-on computer-assisted cognitive remediation (CACR) training. To our knowledge, this is the first study to evaluate such an enriched environment paradigm in multiepisode schizophrenia. Twenty-two multiepisode schizophrenia patients and 22 age- and gender-matched healthy controls underwent 3 months of endurance training (30 min, 3 times/wk); CACR training (30 min, 2 times/wk) was added from week 6. Twenty-one additionally recruited schizophrenia patients played table soccer (known as “foosball” in the United States) over the same period and also received the same CACR training. At baseline and after 6 weeks and 3 months, we measured the Global Assessment of Functioning (GAF), Social Adjustment Scale-II (SAS-II), schizophrenia symptoms (Positive and Negative Syndrome Scale), and cognitive domains (Verbal Learning Memory Test [VLMT], Wisconsin Card Sorting Test [WCST], and Trail Making Test). After 3 months, we observed a significant improvement in GAF and in SAS-II social/leisure activities and household functioning adaptation in the endurance training augmented with cognitive remediation group, but not in the table soccer augmented with cognitive remediation group. The severity of negative symptoms and performance in the VLMT and WCST improved significantly in the schizophrenia endurance training augmented with cognitive remediation group from week 6 to the end of the 3-month training period. Future studies should investigate longer intervention periods to show whether endurance training induces stable improvements in everyday functioning. PMID:25782770
Optical character recognition of camera-captured images based on phase features
NASA Astrophysics Data System (ADS)
Diaz-Escobar, Julia; Kober, Vitaly
2015-09-01
Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this creates the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains a great deal of important information independently of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.
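The premise that the Fourier phase carries much of a scene's structural information can be illustrated with a classic phase-only reconstruction; a minimal numpy sketch, illustrative only and not the authors' phase-congruency pipeline:

```python
import numpy as np

def phase_only(image):
    """Discard the Fourier magnitude and keep only the phase."""
    spectrum = np.fft.fft2(image)
    phase = np.exp(1j * np.angle(spectrum))      # unit magnitude, original phase
    return np.real(np.fft.ifft2(phase))

# Toy usage: a synthetic "character" (a bright rectangle on a dark background)
img = np.zeros((64, 64))
img[20:44, 28:36] = 1.0
rec = phase_only(img)
print(rec.shape, rec.std() > 0)                  # edge structure survives
```

Edges and strokes survive the loss of magnitude, which is the property that phase-based features build on.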
26 CFR 301.6311-2 - Payment by credit card and debit card.
Code of Federal Regulations, 2012 CFR
2012-04-01
§ 301.6311-2 Payment by credit card and debit card. (a) Authority to receive—(1) Payments by credit card and debit card. Internal revenue taxes may be paid by credit card or debit card as authorized by this...
Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators
NASA Astrophysics Data System (ADS)
Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.
2015-12-01
Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients to improve the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited due to high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators such as AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware configurations.
Discovering epistasis in large scale genetic association studies by exploiting graphics cards.
Chen, Gary K; Guo, Yunfei
2013-12-03
Despite the enormous investments made in collecting DNA samples and generating germline variation data across thousands of individuals in modern genome-wide association studies (GWAS), progress has been frustratingly slow in explaining much of the heritability in common disease. Today's paradigm of testing independent hypotheses on each single nucleotide polymorphism (SNP) marker is unlikely to adequately reflect the complex biological processes in disease risk. Alternatively, modeling risk as an ensemble of SNPs that act in concert in a pathway, and/or interact non-additively on log risk for example, may be a more sensible way to approach gene mapping in modern studies. Implementing such analyses genome-wide can quickly become intractable, because even modest-size SNP panels on modern genotype arrays (500k markers) pose a combinatorial nightmare, requiring tens of billions of models to be tested for evidence of interaction. In this article, we provide an in-depth analysis of programs that have been developed to explicitly overcome these enormous computational barriers through the use of processors on graphics cards known as Graphics Processing Units (GPU). We include tutorials on GPU technology, which will convey why they are growing in appeal with today's numerical scientists. One obvious advantage is the impressive density of microprocessor cores that are available on only a single GPU. Whereas high end servers feature up to 24 Intel or AMD CPU cores, the latest GPU offerings from nVidia feature over 2600 cores. Each compute node may be outfitted with up to 4 GPU devices. Success on GPUs varies across problems. However, epistasis screens fare well due to the high degree of parallelism exposed in these problems. Papers that we review routinely report GPU speedups of over two orders of magnitude (>100x) over standard CPU implementations.
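The combinatorial burden follows directly from the number of marker pairs; for a 500k-SNP panel, the pairwise interaction count alone is already on the order of 10^11 models, as a quick check shows:

```python
from math import comb
print(comb(500_000, 2))   # 124999750000 pairwise interaction models
```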
High-performance parallel computing in the classroom using the public goods game as an example
NASA Astrophysics Data System (ADS)
Perc, Matjaž
2017-07-01
The use of computers in statistical physics is common because the sheer number of equations that describe the behaviour of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide a documented source code for an easy reproduction of presented results and for further development of Monte Carlo simulations of similar systems.
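As a point of reference for the kind of simulation described (the authors provide their own documented source code), here is a compact CPU-only sketch of the spatial public goods game with Fermi strategy updating; the lattice size, synergy factor r, and noise K are illustrative values, and the deliberately simple pure-Python loop is exactly the kind of workload that benefits from being moved to a graphics card.

```python
import numpy as np

rng = np.random.default_rng(0)
L, r, K = 30, 3.8, 0.5          # lattice size, synergy factor, selection noise (toy values)
s = rng.integers(0, 2, (L, L))  # 1 = cooperator, 0 = defector

def neighbors(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def group_payoff(cx, cy, player):
    """Payoff the player collects from the five-member group centred on (cx, cy)."""
    members = [(cx, cy)] + neighbors(cx, cy)
    nc = sum(s[m] for m in members)              # cooperators in this group
    share = r * nc / len(members)                # enhanced pot, shared equally
    return share - (1.0 if s[player] == 1 else 0.0)   # cooperators pay unit cost

def payoff(x, y):
    """Total payoff of player (x, y), who belongs to five overlapping groups."""
    return sum(group_payoff(cx, cy, (x, y)) for cx, cy in [(x, y)] + neighbors(x, y))

def mc_step():
    """One elementary Monte Carlo step: strategy imitation via the Fermi rule."""
    x, y = rng.integers(0, L, 2)
    nx, ny = neighbors(x, y)[rng.integers(0, 4)]
    if s[x, y] == s[nx, ny]:
        return
    p = 1.0 / (1.0 + np.exp((payoff(x, y) - payoff(nx, ny)) / K))
    if rng.random() < p:
        s[x, y] = s[nx, ny]

for sweep in range(100):                 # one sweep = L*L elementary steps
    for _ in range(L * L):
        mc_step()
print("cooperator fraction:", s.mean())
```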
ERIC Educational Resources Information Center
Zuzack, Christine A.
1997-01-01
Enhanced magnetic strip cards and "smart cards" offer varied service options to college students. Enhanced magnetic strip cards serve as cash cards and provide access to services. Smart cards, which resemble credit cards but contain a microchip, can be used as phone cards, bus passes, library cards, admission tickets, point-of-sale debit…
[Application of patient card technology to health care].
Sayag, E; Danon, Y L
1995-03-15
The potential benefits of patient card technology in improving management and delivery of health services have been explored. Patient cards can be used for numerous applications and functions: as a means of identification, as a key for an insurance payment system, and as a communication medium. Advanced card technologies allow for the storage of data on the card, creating the possibility of a comprehensive and portable patient record. There are many types of patient cards: paper or plastic cards, microfilm cards, bar-code cards, magnetic-strip cards and integrated circuit smart-cards. Choosing the right card depends on the amount of information to be stored, the degree of security required and the cost of the cards and their supporting infrastructure. Problems with patient cards are related to storage capacity, backup and data consistency, access authorization and ownership and compatibility. We think it is worth evaluating the place of patient card technology in the delivery of health services in Israel.
NASA Technical Reports Server (NTRS)
Mitchell, Julie L.; Broyan, James L.; Pickering, Karen D.; Adam, Niklas; Casteel, Michael; Callaham, Michael; Carrier, Chris
2011-01-01
In support of the Urine Processor Assembly Precipitation Prevention Project (UPA PPP), multiple technologies were explored to prevent CaSO4·2H2O (gypsum) precipitation during the on-orbit distillation process. Gypsum precipitation currently limits the water recovery rate onboard the International Space Station (ISS) to 70% versus the planned 85% target water recovery rate. Due to its advanced performance in removing calcium cations in pretreated augmented urine (PTAU), ion exchange was selected as one of the technologies for further development by the PPP team. A total of 12 ion exchange resins were evaluated in various equilibrium and dynamic column tests with solutions of dissolved gypsum, urine ersatz, PTAU, and PTAU brine at 85% water recovery. While initial evaluations indicated that the Purolite SST60 resin had the highest calcium capacity in PTAU (0.30 meq/mL average), later tests showed that the Dowex G26 and Amberlite FPC12H resins had the highest capacity (0.5 meq/mL average). Further dynamic column testing proved that G26 performance is +/- 10% of that value at flow rates of 0.45 and 0.79 Lph under continuous flow, and 10.45 Lph under pulsed flow. Testing at the Marshall Spaceflight Center (MSFC) integrates the ion exchange technology with a UPA ground article under flight-like pulsed flow conditions with PTAU. To date, no gypsum precipitation has taken place in any of the initial evaluations.
NASA Technical Reports Server (NTRS)
Geist, Alessandro; Lin, Michael; Flatley, Tom; Petrick, David
2013-01-01
SpaceCube 1.5 is a high-performance and low-power system in a compact form factor. It is a hybrid processing system consisting of CPU (central processing unit), FPGA (field-programmable gate array), and DSP (digital signal processor) processing elements. The primary processing engine is the Virtex-5 FX100T FPGA, which has two embedded processors. The SpaceCube 1.5 System was a bridge to the SpaceCube 2.0 and SpaceCube 2.0 Mini processing systems. The SpaceCube 1.5 system was the primary avionics in the successful SMART (Small Rocket/Spacecraft Technology) Sounding Rocket mission that was launched in the summer of 2011. For SMART and similar missions, an avionics processor is required that is reconfigurable, has high processing capability, has multi-gigabit interfaces, is low power, and comes in a rugged/compact form factor. The original SpaceCube 1.0 met a number of the criteria, but did not possess the multi-gigabit interfaces that were required and is a higher-cost system. The SpaceCube 1.5 was designed with those mission requirements in mind. The SpaceCube 1.5 features one Xilinx Virtex-5 FX100T FPGA and has excellent size, weight, and power characteristics [4×4×3 in. (approx. = 10×10×8 cm), 3 lb (approx. = 1.4 kg), and 5 to 15 W depending on the application]. The estimated computing power of the two PowerPC 440s in the Virtex-5 FPGA is 1100 DMIPS each. The SpaceCube 1.5 includes two Gigabit Ethernet (1 Gbps) interfaces as well as two SATA-I/II interfaces (1.5 to 3.0 Gbps) for recording to data drives. The SpaceCube 1.5 also features DDR2 SDRAM (double data rate synchronous dynamic random access memory); 4-Gbit Flash for storing application code for the CPU, FPGA, and DSP processing elements; and a Xilinx Platform Flash XL to store FPGA configuration files or application code. The system also incorporates a 12 bit analog to digital converter with the ability to read 32 discrete analog sensor inputs. The SpaceCube 1.5 design also has a built-in accelerometer. In addition, the system has 12 receive and transmit RS-422 interfaces for legacy support. The SpaceCube 1.5 processor card represents the first NASA Goddard design in a compact form factor featuring the Xilinx Virtex-5. The SpaceCube 1.5 incorporates backward compatibility with the SpaceCube 1.0 form factor and stackable architecture. It also makes use of low-cost commercial parts, but is designed for operation in harsh environments.
TEJAS - TELEROBOTICS/EVA JOINT ANALYSIS SYSTEM VERSION 1.0
NASA Technical Reports Server (NTRS)
Drews, M. L.
1994-01-01
The primary objective of space telerobotics as a research discipline is the augmentation and/or support of extravehicular activity (EVA) with telerobotic activity; this allows increased emplacement of on-orbit assets while providing for their "in situ" management. Development of the requisite telerobot work system requires a well-understood correspondence between EVA and telerobotics that to date has been only partially established. The Telerobotics/EVA Joint Analysis Systems (TEJAS) hypermedia information system uses object-oriented programming to bridge the gap between crew-EVA and telerobotics activities. TEJAS Version 1.0 contains twenty HyperCard stacks that use a visual, customizable interface of icon buttons, pop-up menus, and relational commands to store, link, and standardize related information about the primitives, technologies, tasks, assumptions, and open issues involved in space telerobot or crew EVA tasks. These stacks are meant to be interactive and can be used with any database system running on a Macintosh, including spreadsheets, relational databases, word-processed documents, and hypermedia utilities. The software provides a means for managing volumes of data and for communicating complex ideas, relationships, and processes inherent to task planning. The stack system contains 3MB of data and utilities to aid referencing, discussion, communication, and analysis within the EVA and telerobotics communities. The six baseline analysis stacks (EVATasks, EVAAssume, EVAIssues, TeleTasks, TeleAssume, and TeleIssues) work interactively to manage and relate basic information which you enter about the crew-EVA and telerobot tasks you wish to analyze in depth. Analysis stacks draw on information in the Reference stacks as part of a rapid point-and-click utility for building scripts of specific task primitives or for any EVA or telerobotics task. Any or all of these stacks can be completely incorporated within other hypermedia applications, or they can be referenced as is, without requiring data to be transferred into any other database. TEJAS is simple to use and requires no formal training. Some knowledge of HyperCard is helpful, but not essential. All Help cards printed in the TEJAS User's Guide are part of the TEJAS Help Stack and are available from a pop-up menu any time you are using TEJAS. Specific stacks created in TEJAS can be exchanged between groups, divisions, companies, or centers for complete communication of fundamental information that forms the basis for further analyses. TEJAS runs on any Apple Macintosh personal computer with at least one megabyte of RAM, a hard disk, and HyperCard 1.21, or later version. TEJAS is a copyrighted work with all copyright vested in NASA. HyperCard and Macintosh are registered trademarks of Apple Computer, Inc.
Schlosser, Ralf W; Koul, Rajinder K
2015-01-01
The purpose of this scoping review was to (a) map the research evidence on the effectiveness of augmentative and alternative communication (AAC) interventions using speech output technologies (e.g., speech-generating devices, mobile technologies with AAC-specific applications, talking word processors) for individuals with autism spectrum disorders, (b) identify gaps in the existing literature, and (c) posit directions for future research. Outcomes related to speech, language, and communication were considered. A total of 48 studies (47 single case experimental designs and 1 randomized control trial) involving 187 individuals were included. Results were reviewed in terms of three study groupings: (a) studies that evaluated the effectiveness of treatment packages involving speech output, (b) studies comparing one treatment package with speech output to other AAC modalities, and (c) studies comparing the presence with the absence of speech output. The state of the evidence base is discussed and several directions for future research are posited.
Advanced Communications Architecture Demonstration Made Significant Progress
NASA Technical Reports Server (NTRS)
Carek, David Andrew
2004-01-01
[Figure caption: Simulation for a ground station located at 44.5 deg latitude.] The Advanced Communications Architecture Demonstration (ACAD) is a concept architecture to provide high-rate Ka-band (27-GHz) direct-to-ground delivery of payload data from the International Space Station. This new concept in delivering data from the space station targets scientific experiments that buffer data onboard. The concept design provides a method to augment the current downlink capability through the Tracking Data Relay Satellite System (TDRSS) Ku-band (15-GHz) communications system. The ACAD concept pushes the limits of technology in high-rate data communications for space-qualified systems. Research activities are ongoing in examining the various aspects of high-rate communications systems including: (1) link budget parametric analyses, (2) antenna configuration trade studies, (3) orbital simulations (see the preceding figure), (4) optimization of ground station contact time (see the following graph), (5) processor and storage architecture definition, and (6) protocol evaluations and dependencies.
Wireless augmented reality communication system
NASA Technical Reports Server (NTRS)
Devereaux, Ann (Inventor); Agan, Martin (Inventor); Jedrey, Thomas (Inventor)
2006-01-01
The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilsche, Thomas; Schuchart, Joseph; Cope, Joseph
Event tracing is an important tool for understanding the performance of parallel applications. As concurrency increases in leadership-class computing systems, the quantity of performance log data can overload the parallel file system, perturbing the application being observed. In this work we present a solution for event tracing at leadership scales. We enhance the I/O forwarding system software to aggregate and reorganize log data prior to writing to the storage system, significantly reducing the burden on the underlying file system for this type of traffic. Furthermore, we augment the I/O forwarding system with a write buffering capability to limit the impact of artificial perturbations from log data accesses on traced applications. To validate the approach, we modify the Vampir tracing tool to take advantage of this new capability and show that the approach increases the maximum traced application size by a factor of 5x to more than 200,000 processors.
Characterization of active hair-bundle motility by a mechanical-load clamp
NASA Astrophysics Data System (ADS)
Salvi, Joshua D.; Maoiléidigh, Dáibhid Ó.; Fabella, Brian A.; Tobin, Mélanie; Hudspeth, A. J.
2015-12-01
Active hair-bundle motility endows hair cells with several traits that augment auditory stimuli. The activity of a hair bundle might be controlled by adjusting its mechanical properties. Indeed, the mechanical properties of bundles vary between different organisms and along the tonotopic axis of a single auditory organ. Motivated by these biological differences and a dynamical model of hair-bundle motility, we explore how adjusting the mass, drag, stiffness, and offset force applied to a bundle control its dynamics and response to external perturbations. Utilizing a mechanical-load clamp, we systematically mapped the two-dimensional state diagram of a hair bundle. The clamp system used a real-time processor to tightly control each of the virtual mechanical elements. Increasing the stiffness of a hair bundle advances its operating point from a spontaneously oscillating regime into a quiescent regime. As predicted by a dynamical model of hair-bundle mechanics, this boundary constitutes a Hopf bifurcation.
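The oscillatory-to-quiescent boundary mentioned above is conventionally summarized by the Hopf normal form (a generic sketch, not the specific hair-bundle model used in the study):

\dot{z} = (\mu + i\omega_0)\,z - |z|^{2} z,

for \mu > 0 the system oscillates spontaneously with amplitude \sqrt{\mu}, while for \mu < 0 it settles to a quiescent fixed point; in the experiment the imposed mechanical load (e.g., the clamped stiffness) plays the role of the control parameter \mu.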
Draper Laboratory small autonomous aerial vehicle
NASA Astrophysics Data System (ADS)
DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.
1997-06-01
The Charles Stark Draper Laboratory, Inc. and students from Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. This entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, inertial measurement unit, sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground. A ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.
FPGA Techniques Based New Hybrid Modulation Strategies for Voltage Source Inverters
Sudha, L. U.; Baskaran, J.; Elankurisil, S. A.
2015-01-01
This paper corroborates three different hybrid modulation strategies suitable for single-phase voltage source inverters. The proposed method is formulated using fundamental switching and carrier based pulse width modulation methods. The main aim of this proposed method is to optimize a specific performance criterion, such as minimization of the total harmonic distortion (THD), lower order harmonics, switching losses, and heat losses. Thus, the harmonic pollution in the power system will be reduced and the power quality will be augmented with a better harmonic profile for a target fundamental output voltage. The proposed modulation strategies are simulated in MATLAB r2010a and implemented in a Xilinx Spartan 3E-500 FG 320 FPGA processor. The feasibility of these modulation strategies is validated through simulation and experimental results. PMID:25821852
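For reference, the THD being minimized is presumably the standard definition (the abstract does not state the exact form used):

\mathrm{THD} = \frac{\sqrt{\sum_{n=2}^{\infty} V_n^{2}}}{V_1},

where V_1 is the RMS fundamental component of the inverter output voltage and V_n is the RMS of the n-th harmonic.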
Efficiency of static core turn-off in a system-on-a-chip with variation
Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong
2013-10-29
A processor-implemented method for improving efficiency of a static core turn-off in a multi-core processor with variation, the method comprising: conducting via a simulation a turn-off analysis of the multi-core processor at the multi-core processor's design stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's design stage includes a first output corresponding to a first multi-core processor core to turn off; conducting a turn-off analysis of the multi-core processor at the multi-core processor's testing stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's testing stage includes a second output corresponding to a second multi-core processor core to turn off; comparing the first output and the second output to determine if the first output is referring to the same core to turn off as the second output; outputting a third output corresponding to the first multi-core processor core if the first output and the second output are both referring to the same core to turn off.
NASA Technical Reports Server (NTRS)
Knauber, R. N.
1982-01-01
A FORTRAN IV coded computer program is presented for post-flight analysis of a missile's control surface response. It includes preprocessing of digitized telemetry data for time lags, biases, non-linear calibration changes and filtering. Measurements include autopilot attitude rate and displacement gyro output and four control surface deflections. Simple first order lags are assumed for the pitch, yaw and roll axes of control. Each actuator is also assumed to be represented by a first order lag. Mixing of pitch, yaw and roll commands to four control surfaces is assumed. A pseudo-inverse technique is used to obtain the pitch, yaw and roll components from the four measured deflections. This program has been used for over 10 years on the NASA/SCOUT launch vehicle for post-flight analysis and was helpful in detecting incipient actuator stall due to excessive hinge moments. The program is currently set up for a CDC CYBER 175 computer system. It requires 34K words of memory and contains 675 cards. A sample problem presented herein including the optional plotting requires eleven (11) seconds of central processor time.
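The pseudo-inverse step (recovering three axis components from four measured surface deflections) can be sketched as follows; the mixing matrix M below is hypothetical, since the actual SCOUT control mixing is not given in the abstract.

```python
import numpy as np

# Hypothetical mixing of pitch/yaw/roll commands onto four control surfaces:
# deflections = M @ [pitch, yaw, roll]
M = np.array([[ 1.0,  0.0,  1.0],
              [-1.0,  0.0,  1.0],
              [ 0.0,  1.0,  1.0],
              [ 0.0, -1.0,  1.0]])

deflections = np.array([2.1, -1.9, 0.6, -0.4])   # four measured surface angles (deg)

# Least-squares recovery of the three axis components via the pseudo-inverse
pitch, yaw, roll = np.linalg.pinv(M) @ deflections
print(pitch, yaw, roll)
```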
A system for rapid analysis of the femoral blood velocity waveform at the bedside.
Capper, W L; Amoore, J N; Clifford, P C; Immelman, E J; Harries-Jones, E P
1986-01-01
The shape of the arterial blood velocity waveform varies with atherosclerotic disease and several methods of quantifying the shape in order to predict the severity of the disease have been described. These methods include pulsatility index, the Laplace transform method, and principal component analysis. This paper describes the development of a system which allows the operator to acquire, display, and store waveforms from each limb and then to quantify the waveforms at the bedside within a few minutes. The system includes a 10 MHz bi-directional Doppler unit, an instantaneous mean frequency processor, and an Apple II microcomputer fitted with an accelerator card. Both the Laplace transform parameters and the pulsatility index are computed and each result is printed in tabular form together with the averaged results of five waveforms from each limb. The printout is suitable for inclusion in the patient's folder. In initial clinical studies Laplace transform analysis exhibited a good correlation with aorto-iliac stenosis as assessed angiographically (R = 0.73, P < 0.001, t test).
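Of the waveform indices mentioned, the pulsatility index has the simplest closed form (shown for reference; the Laplace transform parameters require fitting a model transfer function and are not reproduced here):

\mathrm{PI} = \frac{V_{\max} - V_{\min}}{V_{\mathrm{mean}}},

computed over one cardiac cycle of the mean-frequency (velocity) waveform.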
permGPU: Using graphics processing units in RNA microarray association studies.
Shterev, Ivo D; Jung, Sin-Ho; George, Stephen L; Owzar, Kouros
2010-06-16
Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
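A minimal CPU sketch of the permutation-resampling idea that permGPU parallelizes on the graphics card is given below; the two-sample t-like statistic and the array shapes are illustrative assumptions, and this is not permGPU's API.

```python
import numpy as np

def perm_pvalues(expr, labels, n_perm=10_000, seed=0):
    """Permutation p-values for a per-gene two-sample t-like statistic.

    expr   : (genes x samples) expression matrix
    labels : binary group labels, one per sample
    """
    rng = np.random.default_rng(seed)

    def tstat(lab):
        a, b = expr[:, lab == 1], expr[:, lab == 0]
        return (a.mean(1) - b.mean(1)) / np.sqrt(a.var(1, ddof=1) / a.shape[1]
                                                 + b.var(1, ddof=1) / b.shape[1])

    observed = np.abs(tstat(labels))
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):                       # embarrassingly parallel loop
        exceed += np.abs(tstat(rng.permutation(labels))) >= observed
    return (exceed + 1) / (n_perm + 1)

# Toy usage
expr = np.random.default_rng(1).normal(size=(100, 40))
labels = np.repeat([0, 1], 20)
print(perm_pvalues(expr, labels, n_perm=200)[:5])
```

Each permutation is independent of the others, which is why this workload maps so naturally onto thousands of GPU threads.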
Development of a material processing plant for lunar soil
NASA Technical Reports Server (NTRS)
Goettsch, Ulix; Ousterhout, Karl
1992-01-01
Currently there is considerable interest in developing in-situ materials processing plants for both the Moon and Mars. Two of the most important aspects of developing such a materials processing plant is the overall system design and the integration of the different technologies into a reliable, lightweight, and cost-effective unit. The concept of an autonomous materials processing plant that is capable of producing useful substances from lunar regolith was developed. In order for such a materials processing plant to be considered as a viable option, it must be totally self-contained, able to operate autonomously, cost effective, light weight, and fault tolerant. In order to assess the impact of different technologies on the overall systems design and integration, a one-half scale model was constructed that is capable of scooping up (or digging) lunar soil, transferring the soil to a solar furnace, heating the soil in the furnace to liberate the gasses, and transferring the spent soil to a 'tile' processing center. All aspects of the control system are handled by a 386 class PC via D/A, A/D, and DSP (Digital Signal Processor) control cards.
Design of quantum efficiency measurement system for variable doping GaAs photocathode
NASA Astrophysics Data System (ADS)
Chen, Liang; Yang, Kai; Liu, HongLin; Chang, Benkang
2008-03-01
Achieving high quantum efficiency and good stability has recently been a main direction in GaAs photocathode development. Through early research, we demonstrated that the variable doping structure is feasible and practical, and has great potential. In order to optimize variable doping GaAs photocathode preparation techniques and study variable doping theory in depth, a real-time quantum efficiency measurement system for GaAs photocathodes has been designed. The system uses an FPGA (field-programmable gate array) device and a high-speed A/D converter to build a high signal-to-noise ratio, high-speed data acquisition card. An ARM (Advanced RISC Machines) core processor (S3C2410) and a real-time embedded system are used to obtain and display the measurement results. The measurement precision of the photocurrent reaches 1 nA, and the measurement range of the spectral response curve is 400-1000 nm. The GaAs photocathode preparation process can be monitored in real time with this system. Other functions could easily be added in the future to show the physical variation of the photocathode during the preparation process more fully.
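Quantum efficiency at each wavelength is presumably computed from the measured photocurrent and the incident optical power in the usual way (the abstract does not give the formula, so the standard relation is shown here):

\mathrm{QE}(\lambda) = \frac{I_{\mathrm{ph}}/e}{P_{\mathrm{opt}}/(hc/\lambda)} = \frac{hc}{e\,\lambda}\,\frac{I_{\mathrm{ph}}}{P_{\mathrm{opt}}},

where I_ph is the photocurrent, P_opt the incident optical power at wavelength \lambda, e the electron charge, h Planck's constant, and c the speed of light.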
Dolphin Sounds-Inspired Covert Underwater Acoustic Communication and Micro-Modem
Qiao, Gang; Liu, Songzuo; Bilal, Muhammad
2017-01-01
A novel portable underwater acoustic modem is proposed in this paper for covert communication between divers or underwater unmanned vehicles (UUVs) and divers at a short distance. For the first time, real dolphin calls are used in the modem to realize biologically inspired Covert Underwater Acoustic Communication (CUAC). A variety of dolphin whistles and clicks stored in an SD card inside the modem helps to realize different biomimetic CUAC algorithms based on the specified covert scenario. In this paper, the information is conveyed during the time interval between dolphin clicks. TMS320C6748 and TLV320AIC3106 are the core processors used in our unique modem for fast digital processing and interconnection with other terminals or sensors. Simulation results show that the bit error rate (BER) of the CUAC algorithm is less than 10⁻⁵ when the signal to noise ratio is over -5 dB. The modem was tested in an underwater pool, and a data rate of 27.1 bits per second at a distance of 10 m was achieved. PMID:29068363
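A toy sketch of conveying bits in the intervals between clicks (a pulse-interval keying scheme) is shown below; the interval values are arbitrary and this is not the authors' waveform design.

```python
# Toy pulse-interval keying: a '0' bit maps to a short gap, a '1' bit to a long gap
SHORT, LONG = 0.050, 0.080        # inter-click intervals in seconds (arbitrary)

def encode(bits):
    return [LONG if b else SHORT for b in bits]

def decode(intervals):
    return [1 if dt > (SHORT + LONG) / 2 else 0 for dt in intervals]

bits = [1, 0, 1, 1, 0]
print(decode(encode(bits)) == bits)   # True
```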
A geophone wireless sensor network for investigating glacier stick-slip motion
NASA Astrophysics Data System (ADS)
Martinez, Kirk; Hart, Jane K.; Basford, Philip J.; Bragg, Graeme M.; Ward, Tyler; Young, David S.
2017-08-01
We have developed an innovative passive borehole geophone system, as part of a wireless environmental sensor network to investigate glacier stick-slip motion. The new geophone nodes use an ARM Cortex-M3 processor with a low power design capable of running on battery power while embedded in the ice. Only data from seismic events were stored, held temporarily on a micro-SD card until retrieved by systems on the glacier surface that are connected to the internet. The sampling rates, detection and filtering levels were determined from a field trial using a standard commercial passive seismic system. The new system was installed on the Skalafellsjökull glacier in Iceland and provided encouraging results. The results showed that there was a relationship between surface meltwater production and seismic events (ice quakes), and these occurred in a pattern related to the glacier's surface-meltwater-controlled velocity changes (stick-slip motion). Three types of seismic events were identified, which were interpreted to reflect a pattern of till deformation (Type A), basal sliding (Type B) and hydraulic transience (Type C) associated with stick-slip motion.
Real-Time Three-Dimensional Cell Segmentation in Large-Scale Microscopy Data of Developing Embryos.
Stegmaier, Johannes; Amat, Fernando; Lemon, William C; McDole, Katie; Wan, Yinan; Teodoro, George; Mikut, Ralf; Keller, Philipp J
2016-01-25
We present the Real-time Accurate Cell-shape Extractor (RACE), a high-throughput image analysis framework for automated three-dimensional cell segmentation in large-scale images. RACE is 55-330 times faster and 2-5 times more accurate than state-of-the-art methods. We demonstrate the generality of RACE by extracting cell-shape information from entire Drosophila, zebrafish, and mouse embryos imaged with confocal and light-sheet microscopes. Using RACE, we automatically reconstructed cellular-resolution tissue anisotropy maps across developing Drosophila embryos and quantified differences in cell-shape dynamics in wild-type and mutant embryos. We furthermore integrated RACE with our framework for automated cell lineaging and performed joint segmentation and cell tracking in entire Drosophila embryos. RACE processed these terabyte-sized datasets on a single computer within 1.4 days. RACE is easy to use, as it requires adjustment of only three parameters, takes full advantage of state-of-the-art multi-core processors and graphics cards, and is available as open-source software for Windows, Linux, and Mac OS. Copyright © 2016 Elsevier Inc. All rights reserved.
Design and Development of the Solar Dynamics Observatory (SDO) Electrical Power System
NASA Technical Reports Server (NTRS)
Denney, Keys; Burns, Michael; Kercheval, Bradford
2009-01-01
The SDO spacecraft was designed to help us understand the Sun's influence on Earth and Near-Earth space by studying the solar atmosphere on small scales of space and time and in many wavelengths simultaneously. It will perform its operations in a geosynchronous orbit of the earth. This paper will present background on the SDO mission, an overview of the design and development activities associated specifically with the SDO electrical power system (EPS), as well as the major driving requirements behind the mission design. The primary coverage of the paper will be devoted to some of the challenges faced during the design and development phase. This will include the challenges associated with development of a compatible CompactPCI (cPCI) interface within the Power System Electronics (PSE) in order to utilize a "common" processor card, implementation of new solid state power controllers (SSPC) for primary load distribution switching and over current protection in the PSE, and the design approach adopted to meet single fault tolerance requirements for all of the SDO EPS functions.
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2017-07-11
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
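The calibrate-then-apply procedure described in the claim can be sketched generically as a least-squares fit of an indirect metric (for example, a package power reading) to a measured temperature; the numbers below are illustrative and this is not the patented implementation.

```python
import numpy as np

# Calibration phase: indirect thermal data (e.g. package power, W) vs. measured temperature (C)
indirect = np.array([20.0, 35.0, 50.0, 65.0, 80.0])   # illustrative readings
measured = np.array([41.0, 52.5, 63.8, 75.1, 86.0])   # illustrative sensor values
slope, intercept = np.polyfit(indirect, measured, 1)   # the learned relationship

# Operational phase: estimate the temperature from indirect data alone
def estimate_temp(indirect_reading):
    return slope * indirect_reading + intercept

print(round(estimate_temp(58.0), 1))   # inferred temperature during actual operation
```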
[New electronic data carriers in Bosnia-Herzegovina].
Masić, I; Pandza, H; Knezević, Z; Toromanović, S
1999-01-01
Bosnia and Herzegovina has been developing a new Health Care System based on an Electronic Registration Card. Developing countries proceeded from the manual and semiautomatic method of medical data processing to the new method of entering, storage, transfer, searching and protection of data using electronic equipment. Currently, many European countries have developed a Medical Card Based Electronic Information System. Both technologies offer advantages and disadvantages. Three types of electronic card are currently in use: Hybrid Card, Smart Card and Laser Card. The Hybrid Card offers characteristics of both the Smart Card and the Laser Card. The differences among these cards, such as capacity, total price, price per byte, and security system, are discussed here. The dilemma is which card should be used as a data carrier. The Electronic Family Registration Card is a question of strategic interest for B&H, but also a big investment. We should avoid the errors of other countries that have been developing card-based systems. In this article we present all mentioned cards and compare advantages and disadvantages of the different technologies.
Mechanization of Cataloging Procedures *
Kilgour, Frederick G.
1965-01-01
The Columbia-Harvard-Yale Medical Libraries Computerization Project has put into operation its mechanized procedure for the production of catalog cards. Cards produced are in final form ready to be filed into a card catalog. Catalogers prepare copy on a worksheet from which punched cards are punched. An IBM 1401 computer processes the decklets of punched cards on magnetic tape to produce the expanded decklets of punched cards needed to print the various packs of catalog cards required to go into different catalogs. Next, the computer punches the expanded decklets of cards to operate an 870 Document Writer, which types out the catalog cards in final form. Cost of cards ready to file is 12.5 cents per card. PMID:14271110
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, J.D.
1970-03-12
The Control Data 405 card reader, modified by the Control Data 3649 card read controller, is the primary mechanism for transferring information from a deck of punched cards into the CDC 6600 and CDC 7600 computers of the LLL Octopus system. The card reader operates at a maximum rate of 1200 cards per minute. A description of the card reader and its operation is given. A discussion of formats is included. (RWR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, Maria; Blyth, Taylor S.; Salko, Robert K.
This document describes how to make a CTF input deck. A CTF input deck is organized into Card Groups and Cards. A Card Group is a collection of Cards. A Card is defined as a line of input. Each Card may contain multiple data entries. A Card is terminated by a new line.
29 CFR 4.123 - Administrative limitations, variances, tolerances, and exemptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... servicing of cards (including credit cards, debit cards, purchase cards, smart cards, and similar card... military personnel in buying and selling homes (which shall not include actual moving or storage of...
Representation of Cultural Role-Play for Training
NASA Technical Reports Server (NTRS)
Santarelli, Thomas; Pepe, Aaron; Rosenzweiz, Larry; Paulus, John; Yi, Ahn Na
2010-01-01
The Department of Defense (DoD) has successfully applied a number of methods for cultural familiarization training ranging from stand-up classroom training, to face-to-face live role-play, to so-called smart-cards. Recent interest has turned to the use of single and multi-player gaming technologies to augment these traditional methods of cultural familiarization. One such system, termed CulturePad, has been designed as a game-based role-play environment suitable for use in training and experimentation involving cultural role-play scenarios. This paper describes the initial CulturePad effort focused on a literature review regarding the use of role-play for cultural training and a feasibility assessment of using a game-mediated environment for role-play. A small-scale pilot involving cultural experts was conducted to collect qualitative behavioral data comparing live role-play to game-mediated role-play in a multiplayer gaming engine.
A Mechanism for Anonymous Credit Card Systems
NASA Astrophysics Data System (ADS)
Tamura, Shinsuke; Yanase, Tatsuro
This paper proposes a mechanism for anonymous credit card systems, in which each credit card holder can conceal individual transactions from the credit card company, while enabling the credit card company to calculate the total expenditures of transactions of individual card holders during specified periods, and to identify card holders who executed dishonest transactions. Based on three existing mechanisms, i.e. anonymous authentication, blind signature and secure statistical data gathering, together with implicit transaction links proposed here, the proposed mechanism enables development of anonymous credit card systems without assuming any absolutely trustworthy entity like tamper resistant devices or organizations faithful both to the credit card company and card holders.
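Of the three building blocks named, the blind signature is the easiest to illustrate concretely; below is a toy RSA blind signature with tiny, insecure parameters (purely illustrative, not the paper's construction; requires Python 3.8+ for modular inverses via pow).

```python
# Toy RSA blind signature (insecure textbook parameters, for illustration only)
p, q, e = 61, 53, 17
n = p * q                                   # public modulus (3233)
d = pow(e, -1, (p - 1) * (q - 1))           # signer's private exponent

m, r = 1234, 7                              # message digest and blinding factor, gcd(r, n) = 1
blinded = (m * pow(r, e, n)) % n            # card holder blinds the message
blind_sig = pow(blinded, d, n)              # company signs without seeing m
sig = (blind_sig * pow(r, -1, n)) % n       # card holder unblinds the signature
print(pow(sig, e, n) == m % n)              # True: a valid signature on m
```

The signer never sees m, yet the unblinded signature verifies against m, which is the kind of unlinkability the anonymous credit card mechanism builds on.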
Data Collection and Recording on the Wisconsin/GSFC X-ray Quantum Calorimeter
NASA Astrophysics Data System (ADS)
O'Neill, Laura; X-ray Astrophysics Group at the University of Wisconsin-Madison
2016-01-01
The Wisconsin/GSFC X-ray Quantum Calorimeter (XQC) is an astronomical X-ray sounding rocket payload which uses a micro-calorimeter array to detect low (less than 1 keV) X-rays. Three different devices were evaluated to upgrade XQC's data collection and recording system. The system takes incoming data from XQC's pixel sensors and stores it to a memory card. The upgrade is a much smaller board and much more compact storage device. The Terasic DE0-Nano, Terasic DE0-Nano SoC, and the BeagleBone Black were tested to determine which would suit the needs of XQC best. The device needed to take incoming data, store it to an SD card, and be able to output it through a USB connection. The Terasic DE0-Nano is a simple FPGA, but needed some peripheral additions for an SD card slot and USB readout. The Terasic DE0-Nano SoC was a powerful FPGA and hard processor running Linux combined. It was able to do what was needed, but pulled too much power in the process. The BeagleBone Black had a microcontroller and also ran Linux. This last device ended up being the best choice, as it did not require too much power and had a very easy system already in place for USB readout. The only difficulty to deal with was programming the microcontroller in assembly language. This device is necessary due to the telemetry on XQC not being able to send all of the data down during the flight. It records valuable data about low energy X-rays so that the X-ray Astrophysics Groups at the University of Wisconsin-Madison and Goddard Space Flight Center can analyze and resolve the spectrum of the soft X-ray background. Later, using the digital logic on a Terasic DE0-Nano FPGA, a data simulator for the BeagleBone Black data collection and recording device was created. Programmed with Quartus II, the simulator uses basic digital logic components to fabricate trackable data signals and related timing signals to send to the data management device, as well as other timing signals that are asynchronous to the rest of the circuit, a failsafe enable for outputs, and several user feedback components.
Turning a Private Label Bank Card into a Multi-function Campus ID Card.
ERIC Educational Resources Information Center
James, Thomas G.; Norwood, Bill R.
1991-01-01
This article describes the development at Florida State University of the Seminole ACCESS card, which functions simultaneously as a bank automated teller machine card, a student identification card, and a debit card. Explained are the partnership between the university and the bank charge card center, funding system, technologies involved, and…
Zhou, P; Chou, J; Olea, R S; Yuan, J; Wagner, G
1999-09-28
Direct recruitment and activation of caspase-9 by Apaf-1 through the homophilic CARD/CARD (Caspase Recruitment Domain) interaction is critical for the activation of caspases downstream of mitochondrial damage in apoptosis. Here we report the solution structure of the Apaf-1 CARD domain and its surface of interaction with caspase-9 CARD. Apaf-1 CARD consists of six tightly packed amphipathic alpha-helices and is topologically similar to the RAIDD CARD, with the exception of a kink observed in the middle of the N-terminal helix. By using chemical shift perturbation data, the homophilic interaction was mapped to the acidic surface of Apaf-1 CARD centered around helices 2 and 3. Interestingly, a significant portion of the chemically perturbed residues are hydrophobic, indicating that in addition to the electrostatic interactions predicted previously, hydrophobic interaction is also an important driving force underlying the CARD/CARD interaction. On the basis of the identified functional residues of Apaf-1 CARD and the surface charge complementarity, we propose a model of CARD/CARD interaction between Apaf-1 and caspase-9.
NASA Astrophysics Data System (ADS)
Caplan, R. M.
2013-04-01
We present a simple to use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphic processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB including built-in visualization and analysis tools. Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU as the code is currently only designed for running on a single GPU. Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators. Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com. Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
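As a rough illustration of the numerical scheme that summary describes (an explicit fourth-order Runge-Kutta integrator in time with second-order central differencing in space), the following NumPy sketch time-steps the one-dimensional cubic nonlinear Schrödinger equation. The sign convention, boundary handling, grid, and step sizes are assumptions chosen for the example; this is not the NLSEmagic GPU code.

import numpy as np

# 1D cubic NLSE assumed here:  i*psi_t + psi_xx + s*|psi|^2*psi = 0
def nlse_rhs(psi, dx, s=1.0):
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2  # 2nd-order central differences (interior only)
    return 1j * (lap + s * np.abs(psi) ** 2 * psi)              # psi_t from the equation above

def rk4_step(psi, dt, dx):
    k1 = nlse_rhs(psi, dx)
    k2 = nlse_rhs(psi + 0.5 * dt * k1, dx)
    k3 = nlse_rhs(psi + 0.5 * dt * k2, dx)
    k4 = nlse_rhs(psi + dt * k3, dx)
    return psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: evolve a bright-soliton-like initial condition on a uniform grid.
x = np.linspace(-20.0, 20.0, 512)
dx = x[1] - x[0]
psi = 1.0 / np.cosh(x) * np.exp(0.5j * x)
for _ in range(1000):
    psi = rk4_step(psi, dt=1e-4, dx=dx)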
Methods and systems for providing reconfigurable and recoverable computing resources
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.
Rectangular Array Of Digital Processors For Planning Paths
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.; Fossum, Eric R.; Nixon, Robert H.
1993-01-01
Prototype 24 x 25 rectangular array of asynchronous parallel digital processors rapidly finds best path across two-dimensional field, which could be patch of terrain traversed by robotic or military vehicle. Implemented as single-chip very-large-scale integrated circuit. Excepting processors on edges, each processor communicates with four nearest neighbors along paths representing travel to north, south, east, and west. Each processor contains delay generator in form of 8-bit ripple counter, preset to 1 of 256 possible values. Operation begins with choice of processor representing starting point. That processor transmits signals to nearest neighbor processors, which retransmit to other neighboring processors, and process repeats until signals have propagated across entire field.
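A serial software analogue of that wavefront idea is sketched below: every grid cell carries a preset delay, and the earliest signal-arrival times across the field encode the best path. The grid representation and the priority-queue simulation are illustrative assumptions; the actual device propagates signals asynchronously in hardware.

import heapq

def earliest_arrival(delays, start):
    # delays: 2D list of per-cell delays; start: (row, col) of the starting processor.
    rows, cols = len(delays), len(delays[0])
    arrival = [[float("inf")] * cols for _ in range(rows)]
    arrival[start[0]][start[1]] = 0
    heap = [(0, start)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > arrival[r][c]:
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # north, south, west, east links
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + delays[nr][nc]
                if nt < arrival[nr][nc]:
                    arrival[nr][nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return arrival   # backtracking from the goal along strictly decreasing times yields the path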
In vitro reconstitution of interactions in the CARD9 signalosome
Park, Jin Hee; Choi, Jae Young; Mustafa, Mir Faisal; Park, Hyun Ho
2017-01-01
The caspase-associated recruitment domain (CARD)-containing protein 9 (CARD9) signalosome is composed of CARD9, B-cell CLL/lymphoma 10 (BCL10) and mucosa-associated lymphoid tissue lymphoma translocation protein 1 (MALT1). The CARD9 signalosome has been reported to exert critical functions in the immunoreceptor tyrosine-based activation motif-coupled receptor-mediated activation of myeloid cells, through nuclear factor-κB pathways during innate immunity processes. During CARD9 signalosome assembly, BCL10 has been revealed to function as an adaptor protein and to interact with CARD9 via CARD-CARD interactions; BCL10 also interacts with MALT1 via its C-terminal Ser/Thr-rich region and the first immunoglobulin domain of MALT1. The CARD9 signalosome is implicated in critical biological processes; however, its structural and biochemical characteristics have yet to be elucidated. In the present study, CARD9 and BCL10 CARDs were successfully purified and characterized, and their biochemical properties were investigated. In addition, CARD9-BCL10 complexes were reconstituted in vitro under low salt and pH conditions. Furthermore, based on structural modeling data, a scheme was proposed to describe the interactions between CARD9 and BCL10. This provides a further understanding of the mechanism of how the CARD9 signalosome may be assembled. PMID:28765954
Buffered coscheduling for parallel programming and enhanced fault tolerance
Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM
2006-01-31
A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
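The following toy simulation illustrates the buffered-coscheduling pattern described in that abstract: during each time interval a processor only buffers the control information for the jobs it wants to send, and a global exchange during the intervening strobe interval tells every processor how many incoming jobs to expect. The interval count, message generation, and data layout are assumptions for illustration only.

import random

def run(num_procs=4, intervals=3, seed=0):
    random.seed(seed)
    expected = [[] for _ in range(num_procs)]
    for _ in range(intervals):
        # Time interval: each processor only buffers control info (src, dest) for jobs to send.
        buffers = [[(src, random.randrange(num_procs))
                    for _ in range(random.randint(0, 3))]
                   for src in range(num_procs)]
        # Strobe interval: global exchange of the buffered control information,
        # so every processor learns how many incoming jobs to expect next.
        counts = [0] * num_procs
        for buf in buffers:
            for _src, dest in buf:
                counts[dest] += 1
        for p in range(num_procs):
            expected[p].append(counts[p])
    return expected   # per-processor list of expected incoming-job counts per interval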
Imaging standards for smart cards
NASA Astrophysics Data System (ADS)
Ellson, Richard N.; Ray, Lawrence A.
1996-02-01
"Smart cards" are plastic cards the size of credit cards which contain integrated circuits for the storage of digital information. The applications of these cards for image storage has been growing as card data capacities have moved from tens of bytes to thousands of bytes. This has prompted the recommendation of standards by the X3B10 committee of ANSI for inclusion in ISO standards for card image storage of a variety of image data types including digitized signatures and color portrait images. This paper will review imaging requirements of the smart card industry, challenges of image storage for small memory devices, card image communications, and the present status of standards. The paper will conclude with recommendations for the evolution of smart card image standards towards image formats customized to the image content and more optimized for smart card memory constraints.
Imaging standards for smart cards
NASA Astrophysics Data System (ADS)
Ellson, Richard N.; Ray, Lawrence A.
1996-01-01
'Smart cards' are plastic cards the size of credit cards which contain integrated circuits for the storage of digital information. The applications of these cards for image storage have been growing as card data capacities have moved from tens of bytes to thousands of bytes. This has prompted the recommendation of standards by the X3B10 committee of ANSI for inclusion in ISO standards for card image storage of a variety of image data types including digitized signatures and color portrait images. This paper reviews imaging requirements of the smart card industry, challenges of image storage for small memory devices, card image communications, and the present status of standards. The paper concludes with recommendations for the evolution of smart card image standards towards image formats customized to the image content and more optimized for smart card memory constraints.
NASA Technical Reports Server (NTRS)
Seale, R. H.
1979-01-01
The prediction of the SRB and ET impact areas requires six separate processors. The SRB impact prediction processor computes the impact areas and related trajectory data for each SRB element. Output from this processor is stored on a secure file accessible by the SRB impact plot processor, which generates the required plots. Similarly, the ET RTLS impact prediction processor and the ET RTLS impact plot processor generate the ET impact footprints for return-to-launch-site (RTLS) profiles. The ET nominal/AOA/ATO impact prediction processor and the ET nominal/AOA/ATO impact plot processor generate the ET impact footprints for non-RTLS profiles. The SRB and ET impact processors compute the size and shape of the impact footprints by tabular lookup in a stored footprint dispersion data base. The location of each footprint is determined by simulating a reference trajectory and computing the reference impact point location. To ensure consistency among all flight design system (FDS) users, much of the input required by these processors will be obtained from the FDS master data base.
Macdonald, S A; Wells, S L; Giesbrecht, N; West, P M
1999-05-01
In 1994, regulatory changes were introduced in Ontario, Canada, permitting the purchase of alcoholic beverages with credit cards at government-operated liquor stores. Two objectives of this study were: (1) to compare the characteristics of credit card shoppers with non credit card shoppers at liquor stores, and (2) to assess whether changes occurred in alcohol consumption patterns among shoppers following the introduction of credit cards. Random digit dialing was used to interview 2,039 telephone participants prior to the introduction of credit cards (Time 1); 1,401 of these subjects were contacted 1 year later (Time 2). Independent sample t tests were used to compare credit card shoppers with shoppers not using credit cards, and paired t tests were performed to assess whether drinking behaviors changed from Time 1 to Time 2. The credit card shoppers were more likely than the non credit card shoppers to be highly educated (p < .001) and to have high incomes (p < .05). Credit card shoppers drank an average of 6.3 drinks over the previous week compared with 4.0 drinks among non credit card shoppers (p < .01). Although the overall amount of alcohol consumed among credit card shoppers dropped from 6.7 drinks at Time 1 to 6.3 at Time 2 (NS), credit card shoppers reported drinking significantly more often after credit cards were introduced (p < .05). The results suggest that credit cards may not present public health problems since significant increases in alcohol consumption among credit card shoppers were not found.
36 CFR 1254.84 - How may I use a debit card for copiers in the Washington, DC, area?
Code of Federal Regulations, 2010 CFR
2010-07-01
..., money order, debit card, or credit card. Your researcher identification card number as encoded on the... 1253 of this chapter, you may use cash or credit card to purchase a debit card from the vending... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false How may I use a debit card...
Dynamic Virtual Credit Card Numbers
NASA Astrophysics Data System (ADS)
Molloy, Ian; Li, Jiangtao; Li, Ninghui
Theft of stored credit card information is an increasing threat to e-commerce. We propose a dynamic virtual credit card number scheme that reduces the damage caused by stolen credit card numbers. A user can use an existing credit card account to generate multiple virtual credit card numbers that are either usable for a single transaction or are tied with a particular merchant. We call the scheme dynamic because the virtual credit card numbers can be generated without online contact with the credit card issuers. These numbers can be processed without changing any of the infrastructure currently in place; the only changes will be at the end points, namely, the card users and the card issuers. We analyze the security requirements for dynamic virtual credit card numbers, discuss the design space, propose a scheme using HMAC, and prove its security under the assumption the underlying function is a PRF.
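A hedged sketch of the general idea follows: a virtual number is derived with an HMAC keyed by a secret shared between the card holder and the issuer, computed over the account, merchant, and a counter, so the issuer can regenerate and verify it without any online contact at generation time. The exact construction, field widths, and check-digit handling in the paper will differ; all names below are illustrative.

import hmac, hashlib

def virtual_card_number(shared_secret: bytes, account: str, merchant: str, counter: int) -> str:
    # Keyed MAC over the account, the merchant binding, and a transaction counter.
    msg = f"{account}|{merchant}|{counter}".encode()
    tag = hmac.new(shared_secret, msg, hashlib.sha256).hexdigest()
    # Fold the MAC into a 16-digit number; the issuer, holding the same secret and
    # counter state, can regenerate it offline and accept or reject the transaction.
    return str(int(tag, 16) % 10**16).zfill(16)

# Example: different merchants (or counter values) yield unlinkable one-use numbers.
n1 = virtual_card_number(b"shared-key", "4111111111111111", "merchant-A", 1)
n2 = virtual_card_number(b"shared-key", "4111111111111111", "merchant-B", 1)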
Citizen empowerment using healthcare and welfare cards.
Cheshire, Paul
2006-01-01
Cards are used in health and welfare to establish the identity of the person presenting the card; to prove their entitlement to a welfare or healthcare service; to store data needed within the care process; and to store data to use in the administration process. There is a desire to empower citizens - to give them greater control over their lives, their health and wellbeing. How can a healthcare and welfare card support this aim? Does having a card empower the citizen? What can a citizen do more easily, reliably, securely or cost-effectively because they have a card? A number of possibilities include: Choice of service provider; Mobility across regional and national boundaries; Privacy; and Anonymity. But in all of these possibilities a card is just one component of a total system and process, and there may be other solutions--technological and manual. There are risks and problems from relying on a card; and issues of Inclusion for people who are unable to use a card. The article concludes that: cards need to be viewed in the context of the whole solution; cards are not the only technological mechanism; cards are not the best mechanism in all circumstances; but cards are a very convenient method in very many situations.
Coding, testing and documentation of processors for the flight design system
NASA Technical Reports Server (NTRS)
1980-01-01
The general functional design and implementation of processors for a space flight design system are briefly described. Discussions of a basetime initialization processor; conic, analytical, and precision coasting flight processors; and an orbit lifetime processor are included. The functions of several utility routines are also discussed.
The computational structural mechanics testbed generic structural-element processor manual
NASA Technical Reports Server (NTRS)
Stanley, Gary M.; Nour-Omid, Shahram
1990-01-01
The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template is documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended for both Testbed users who wish to invoke ES processors during the course of a structural analysis, and Testbed developers who wish to construct new element processors (or modify existing ones).
NASA Technical Reports Server (NTRS)
Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)
1994-01-01
In a computer having a large number of single-instruction multiple data (SIMD) processors, each of the SIMD processors has two sets of three individual processor elements controlled by a master control unit and interconnected among a plurality of register file units where data is stored. The register files input and output data in synchronism with a minor cycle clock under control of two slave control units controlling the register file units connected to respective ones of the two sets of processor elements. Depending upon which ones of the register file units are enabled to store or transmit data during a particular minor clock cycle, the processor elements within an SIMD processor are connected in rings or in pipeline arrays, and may exchange data with the internal bus or with neighboring SIMD processors through interface units controlled by respective ones of the two slave control units.
Karasick, Michael S.; Strip, David R.
1996-01-01
A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modelling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modelling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modelling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication.
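A minimal sketch of such a directed-edge (d-edge) record is given below; the field names and the way a face is described are assumptions for illustration, not the patent's exact data layout.

from dataclasses import dataclass

@dataclass
class DEdge:
    tail: tuple    # description of the edge's start vertex
    head: tuple    # description of the edge's end vertex
    face_id: int   # the single face this directed edge is related to

def build_dedges(faces):
    """faces: mapping face_id -> ordered list of vertex coordinates.
    Each parallel processor would run this only over the faces assigned to it by
    its unique label, with no processor-to-processor intercommunication."""
    dedges = []
    for face_id, verts in faces.items():
        for i, v in enumerate(verts):
            # One d-edge per boundary segment, oriented around the face.
            dedges.append(DEdge(tail=v, head=verts[(i + 1) % len(verts)], face_id=face_id))
    return dedges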
Switch for serial or parallel communication networks
Crosette, D.B.
1994-07-19
A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination. 9 figs.
Switch for serial or parallel communication networks
Crosette, Dario B.
1994-01-01
A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination.
Aubertine, C. L.; Rivera, M.; Rohan, S. M.; Larone, D. H.
2006-01-01
The new VITEK 2 colorimetric card was compared to the previous fluorometric card for identification of yeast. API 20C was considered the “gold standard.” The new card consistently performed better than the older card. Isolates from CHROMagar Candida plates were identified equally as well as those from Sabouraud dextrose agar. PMID:16390976
A microprocessor card software server to support the Quebec health microprocessor card project.
Durant, P; Bérubé, J; Lavoie, G; Gamache, A; Ardouin, P; Papillon, M J; Fortin, J P
1995-01-01
The Quebec Health Smart Card Project is advocating the use of a memory card software server[1] (SCAM) to implement a portable medical record (PMR) on a smart card. The PMR is viewed as an object that can be manipulated by SCAM's services. In fact, we can talk about a pseudo-object-oriented approach. This software architecture provides a flexible and evolutive way to manage and optimize the PMR. SCAM is a generic software server; it can manage smart cards as well as optical (laser) cards or other types of memory cards. But, in the specific case of the Quebec Health Card Project, SCAM is used to provide services between physicians' or pharmacists' software and IBM smart card technology. We propose to expose the concepts and techniques used to provide a generic environment to deal with smart cards (and more generally with memory cards), to obtain a dynamic and evolutive PMR, to raise the system's global security level and data integrity, to optimize significantly the management of the PMR, and to provide statistical information about the use of the PMR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, Maria; Toptan, Aysenur; Porter, Nathan
This document describes how to make a CTF input deck. A CTF input deck is organized into Card Groups and Cards. A Card Group is a collection of Cards. A Card is defined as a line of input. Each Card may contain multiple data. A Card is terminated by making a new line. This document has been organized so that each Card Group is discussed in its own dedicated chapter. Each card is discussed in its own dedicated section. Each data in the card is discussed in its own block. The block gives information about the data, including the number of the input, the title, a description of the meaning of the data, units, data type, and so on. An example block is shown below to discuss the meaning of each entry in the block.
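The short parser below illustrates the Card Group / Card / data hierarchy described above. The "*GROUP <n>" header convention is a made-up placeholder rather than actual CTF syntax; a Card is taken to be one line and its data the whitespace-separated tokens on that line.

def parse_deck(text):
    groups, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.upper().startswith("*GROUP"):
            current = int(line.split()[1])
            groups[current] = []                   # start a new Card Group
        elif current is not None:
            groups[current].append(line.split())   # one Card per line, multiple data per Card
    return groups

example = """*GROUP 1
0.1  300.0  2
*GROUP 2
5  10  fuel"""
deck = parse_deck(example)   # {1: [['0.1', '300.0', '2']], 2: [['5', '10', 'fuel']]}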
FLY MPI-2: a parallel tree code for LSS
NASA Astrophysics Data System (ADS)
Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.
2006-04-01
New version program summaryProgram title: FLY 3.1 Catalogue identifier: ADSC_v2_0 Licensing provisions: yes Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland No. of lines in distributed program, including test data, etc.: 158 172 No. of bytes in distributed program, including test data, etc.: 4 719 953 Distribution format: tar.gz Programming language: Fortran 90, C Computer: Beowulf cluster, PC, MPP systems Operating system: Linux, Aix RAM: 100M words Catalogue identifier of previous version: ADSC_v1_0 Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159 Does the new version supersede the previous version?: yes Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986) Reasons for the new version: The new version of FLY is implemented by using the MPI-2 standard: the distributed version 3.1 was developed by using the MPICH2 library on a PC Linux cluster. Today the FLY performance allows us to consider the FLY code among the most powerful parallel codes for tree N-body simulations. Another important new feature regards the availability of an interface with hydrodynamical Paramesh based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. The idea to build an interface between two codes, that have different and complementary cosmological tasks, allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block structure. Summary of revisions: The parallel communication schema was totally changed. The new version adopts the MPICH2 library. Now FLY can be executed on all Unix systems having an MPI-2 standard library. The main data structure, is declared in a module procedure of FLY (fly_h.F90 routine). FLY creates the MPI Window object for one-sided communication for all the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR) the following main window objects are created: win_pos, win_vel, win_acc: particles positions velocities and accelerations, win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping: cells positions, masses, quadrupole momenta, tree structure and grouping cells. Other windows are created for dynamic load balance and global counters. Restrictions: The program uses the leapfrog integrator schema, but could be changed by the user. Unusual features: FLY uses the MPI-2 standard: the MPICH2 library on Linux systems was adopted. To run this version of FLY the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript. Running time: IBM Linux Cluster 1350, 512 nodes with 2 processors for each node and 2 GB RAM for each processor, at Cineca, was adopted to make performance tests. Processor type: Intel Xeon Pentium IV 3.0 GHz and 512 KB cache (128 nodes have Nocona processors). 
Internal Network: Myricom LAN Card "C" Version and "D" Version. Operating System: Linux SuSE SLES 8. The code was compiled using the mpif90 compiler version 8.1 with basic optimization options in order to obtain performance figures that can usefully be compared with other generic clusters.
Conditions for space invariance in optical data processors used with coherent or noncoherent light.
Arsenault, H R
1972-10-01
The conditions for space invariance in coherent and noncoherent optical processors are considered. All linear optical processors are shown to belong to one of two types. The conditions for space invariance are more stringent for noncoherent processors than for coherent processors, so that a system that is linear in coherent light may be nonlinear in noncoherent light. However, any processor that is linear in noncoherent light is also linear in the coherent limit.
Patients' Awareness, Usage and Impact of Hospital Report Cards in the US.
Emmert, Martin; Schlesinger, Mark
2017-12-01
Little knowledge is available about the importance of hospital report cards in the US from the patients' perspective. It also remains unknown whether specific report cards with a stronger emphasis on clinical measures have a greater impact on hospital choice than general report cards that focus on online-derived ratings. The aim of this study was to determine the awareness and usage of hospital report cards as well as their impact on hospital choice in the US. We conducted a cross-sectional study by surveying a stratified online sample (N = 1332) to ensure representativeness to the US online population (February 2015). Overall, 75% of all respondents (mean age 45.4 years; 54% female) were aware of hospital report cards. Among these, 56% had used a report card to search for a hospital, and 80% of report card users stated having been influenced by a report card. Both the awareness and usage of general report cards were shown to be higher than for specific report cards. No significant differences could be detected regarding the impact between general or specific report cards on hospital choice. Our results indicate that hospital report cards play a considerable role among patients when searching for a hospital in the US; however, patients do not seem to have a preference regarding the type of report cards they use when selecting a hospital.
Augmented-reality integrated robotics in neurosurgery: are we there yet?
Madhavan, Karthik; Kolcun, John Paul G; Chieng, Lee Onn; Wang, Michael Y
2017-05-01
Surgical robots have captured the interest-if not the widespread acceptance-of spinal neurosurgeons. But successful innovation, scientific or commercial, requires the majority to adopt a new practice. "Faster, better, cheaper" products should in theory conquer the market, but often fail. The psychology of change is complex, and the "follow the leader" mentality, common in the field today, lends little trust to the process of disseminating new technology. Beyond product quality, timing has proven to be a key factor in the inception, design, and execution of new technologies. Although the first robotic surgery was performed in 1985, scant progress was seen until the era of minimally invasive surgery. This movement increased neurosurgeons' dependence on navigation and fluoroscopy, intensifying the drive for enhanced precision. Outside the field of medicine, various technology companies have made great progress in popularizing co-robots ("cobots"), augmented reality, and processor chips. This has helped to ease practicing surgeons into familiarity with and acceptance of these technologies. The adoption among neurosurgeons in training is a "follow the leader" phenomenon, wherein new surgeons tend to adopt the technology used during residency. In neurosurgery today, robots are limited to computers functioning between the surgeon and patient. Their functions are confined to establishing a trajectory for navigation, with task execution solely in the surgeon's hands. In this review, the authors discuss significant untapped technologies waiting to be used for more meaningful applications. They explore the history and current manifestations of various modern technologies, and project what innovations may lie ahead.
Self-contained eye-safe laser radar using an erbium-doped fiber laser
NASA Astrophysics Data System (ADS)
Driscoll, Thomas A.; Radecki, Dan J.; Tindal, Nan E.; Corriveau, John P.; Denman, Richard
2003-07-01
An Eye-safe Laser Radar has been developed under White Sands Missile Range sponsorship. The SEAL system, the Self-contained Eyesafe Autonomous Laser system, is designed to measure target position within a 0.5 meter box. Targets are augmented with Scotchlite for ranging out to 6 km and augmented with a retroreflector for targets out to 20 km. The data latency is less than 1.5 ms, and the position update rate is 1 kHz. The system is air-cooled, contained in a single 200-lb, 6-cubic-foot box, and uses less than 600 watts of prime power. The angle-angle-range data will be used to measure target dynamics and to control a tracking mount. The optical system is built around a diode-pumped, erbium-doped fiber laser rated at 1.5 watts average power at 10 kHz repetition rate with 25 nsec pulse duration. An 8 inch-diameter, F/2.84 telescope is relayed to a quadrant detector at F/0.85 giving a 5 mrad field of view. Two detectors have been evaluated, a Germanium PIN diode and an Intevac TE-IPD. The receiver electronics uses a DSP network of 6 SHARC processors to implement ranging and angle error algorithms along with an Optical AGC, including beam divergence/FOV control loops. Laboratory measurements of the laser characteristics, and system range and angle accuracies will be compared to simulations. Field measurements against actual targets will be presented.
Broadcasting collective operation contributions throughout a parallel computer
Faraj, Ahmad [Rochester, MN
2012-02-21
Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
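The toy walk-through below models the two-phase broadcast described in that abstract: an intra-node exchange followed by inter-node forwarding in a serial processor transmission sequence. The data layout and the ordering used for the sequence are assumptions for illustration, not the patented implementation.

def broadcast_contributions(nodes):
    # nodes: list of compute nodes, each a dict {processor_id: contribution}.
    # Phase 1 (intra-node): every processor shares its contribution with the
    # other processors on its own compute node.
    node_views = [dict(node) for node in nodes]
    # Phase 2 (inter-node): following a serial processor transmission sequence,
    # each processor forwards its contribution on the designated network link.
    global_view = {}
    for view in node_views:
        for proc_id in sorted(view):          # the assumed serial transmission sequence
            global_view[proc_id] = view[proc_id]
    # Afterwards every processor on every compute node holds all contributions.
    return [dict(global_view) for _ in nodes]

result = broadcast_contributions([{0: "a", 1: "b"}, {2: "c", 3: "d"}])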
Saieg, Mauro Ajaj; Geddie, William R; Boerner, Scott L; Liu, Ni; Tsao, Ming; Zhang, Tong; Kamel-Reid, Suzanne; da Cunha Santos, Gilda
2012-06-25
Novel high-throughput molecular technologies have made the collection and storage of cells and small tissue specimens a critical issue. The FTA card provides an alternative to cryopreservation for biobanking fresh unfixed cells. The current study compared the quality and integrity of the DNA obtained from 2 types of FTA cards (Classic and Elute) using 2 different extraction protocols ("Classic" and "Elute") and assessed the feasibility of performing multiplex mutational screening using fine-needle aspiration (FNA) biopsy samples. Residual material from 42 FNA biopsies was collected in the cards (21 Classic and 21 Elute cards). DNA was extracted using the Classic protocol for Classic cards and both protocols for Elute cards. Polymerase chain reaction for p53 (1.5 kilobase) and CARD11 (500 base pair) was performed to assess DNA integrity. Successful p53 amplification was achieved in 95.2% of the samples from the Classic cards and in 80.9% of the samples from the Elute cards using the Classic protocol and 28.5% using the Elute protocol (P = .001). All samples (both cards) could be amplified for CARD11. There was no significant difference in the DNA concentration or 260/280 purity ratio when the 2 types of cards were compared. Five samples were also successfully analyzed by multiplex MassARRAY spectrometry, with a mutation in KRAS found in 1 case. High molecular weight DNA was extracted from the cards in sufficient amounts and quality to perform high-throughput multiplex mutation assays. The results of the current study also suggest that FTA Classic cards preserve better DNA integrity for molecular applications compared with the FTA Elute cards. Copyright © 2012 American Cancer Society.
Maurer, Kathryn; Luo, Hongxue; Shen, Zhiyong; Wang, Guixiang; Du, Hui; Wang, Chun; Liu, Xiaobo; Wang, Xiamen; Qu, Xinfeng; Wu, Ruifang; Belinson, Jerome
2016-03-01
Solid media transport can be used to design adaptable cervical cancer screening programs but currently is limited by one card with published data. To develop and evaluate a solid media transport card for use in high-risk human papillomavirus detection (HR-HPV). The Preventative Oncology International (POI) card was constructed using PK 226 paper(®) treated with cell-lysing solution and indicating dye. Vaginal samples were applied to the POI card and the indicating FTA (iFTA) elute card. A cervical sample was placed in liquid media. All specimens were tested for HR-HPV. Color change was assessed at sample application and at card processing. Stability of the POI card and iFTA elute card was tested at humidity. 319 women were enrolled. Twelve women had at least one insufficient sample with no difference between media (p=0.36). Compared to liquid samples, there was good agreement for HR-HPV detection with kappa of 0.81 (95% CI 0.74-0.88) and 0.71 (95% CI 0.62-0.79) for the POI and iFTA elute card respectively. Sensitivity for ≥CIN2 was 100% (CI 100-100%), 95.1% (CI 92.7-97.6%), and 93.5% (CI 90.7-96.3%) for the HR-HPV test from the liquid media, POI card, and iFTA elute card respectively. There was no color change of the POI card noted in humidity but the iFTA elute card changed color at 90% humidity. The POI card is suitable for DNA transport and HR-HPV testing. This card has the potential to make cervical cancer screening programs more affordable worldwide. Copyright © 2015 Elsevier B.V. All rights reserved.
LANDSAT-D flight segment operations manual. Appendix B: OBC software operations
NASA Technical Reports Server (NTRS)
Talipsky, R.
1981-01-01
The LANDSAT 4 satellite contains two NASA standard spacecraft computers and 65,536 words of memory. Onboard computer software is divided into flight executive and applications processors. Both applications processors and the flight executive use one or more of 67 system tables to obtain variables, constants, and software flags. Output from the software for monitoring operation is via 49 OBC telemetry reports subcommutated in the spacecraft telemetry. Information is provided about the flight software as it is used to control the various spacecraft operations and interpret operational OBC telemetry. Processor function descriptions, processor operation, software constraints, processor system tables, processor telemetry, and processor flow charts are presented.
NASA Astrophysics Data System (ADS)
Pruhs, Kirk
A particularly important emergent technology is heterogeneous processors (or cores), which many computer architects believe will be the dominant architectural design in the future. The main advantage of a heterogeneous architecture, relative to an architecture of identical processors, is that it allows for the inclusion of processors whose design is specialized for particular types of jobs, and for jobs to be assigned to a processor best suited for that job. Most notably, it is envisioned that these heterogeneous architectures will consist of a small number of high-power high-performance processors for critical jobs, and a larger number of lower-power lower-performance processors for less critical jobs. Naturally, the lower-power processors would be more energy efficient in terms of the computation performed per unit of energy expended, and would generate less heat per unit of computation. For a given area and power budget, heterogeneous designs can give significantly better performance for standard workloads. Moreover, even processors that were designed to be homogeneous, are increasingly likely to be heterogeneous at run time: the dominant underlying cause is the increasing variability in the fabrication process as the feature size is scaled down (although run time faults will also play a role). Since manufacturing yields would be unacceptably low if every processor/core was required to be perfect, and since there would be significant performance loss from derating the entire chip to the functioning of the least functional processor (which is what would be required in order to attain processor homogeneity), some processor heterogeneity seems inevitable in chips with many processors/cores.
Multi-Core Processor Memory Contention Benchmark Analysis Case Study
NASA Technical Reports Server (NTRS)
Simon, Tyler; McGalliard, James
2009-01-01
Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
Iridovirus CARD Protein Inhibits Apoptosis through Intrinsic and Extrinsic Pathways
Chen, Chien-Wen; Wu, Ming-Shan; Huang, Yi-Jen; Lin, Pei-Wen; Shih, Chueh-Ju; Lin, Fu-Pang; Chang, Chi-Yao
2015-01-01
Grouper iridovirus (GIV) belongs to the genus Ranavirus of the family Iridoviridae; the genomes of such viruses contain an anti-apoptotic caspase recruitment domain (CARD) gene. The GIV-CARD gene encodes a protein of 91 amino acids with a molecular mass of 10,505 Daltons, and shows high similarity to other viral CARD genes and human ICEBERG. In this study, we used Northern blot to demonstrate that GIV-CARD transcription begins at 4 h post-infection; furthermore, we report that its transcription is completely inhibited by cycloheximide but not by aphidicolin, indicating that GIV-CARD is an early gene. GIV-CARD-EGFP and GIV-CARD-FLAG recombinant proteins were observed to translocate from the cytoplasm into the nucleus, but no obvious nuclear localization sequence was observed within GIV-CARD. RNA interference-mediated knockdown of GIV-CARD in GK cells infected with GIV inhibited expression of GIV-CARD and five other viral genes during the early stages of infection, and also reduced GIV infection ability. Immunostaining was performed to show that apoptosis was effectively inhibited in cells expressing GIV-CARD. HeLa cells irradiated with UV or treated with anti-Fas antibody will undergo apoptosis through the intrinsic and extrinsic pathways, respectively. However, over-expression of recombinant GIV-CARD protein in HeLa cells inhibited apoptosis induced by mitochondrial and death receptor signaling. Finally, we report that expression of GIV-CARD in HeLa cells significantly reduced the activities of caspase-8 and -9 following apoptosis triggered by anti-Fas antibody. Taken together, these results demonstrate that GIV-CARD inhibits apoptosis through both intrinsic and extrinsic pathways. PMID:26047333
Simulink/PARS Integration Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vacaliuc, B.; Nakhaee, N.
2013-12-18
The state of the art for signal processor hardware has far out-paced the development tools for placing applications on that hardware. In addition, signal processors are available in a variety of architectures, each uniquely capable of handling specific types of signal processing efficiently. With these processors becoming smaller and demanding less power, it has become possible to group multiple processors, a heterogeneous set of processors, into single systems. Different portions of the desired problem set can be assigned to different processor types as appropriate. As software development tools do not keep pace with these processors, especially when multiple processors of different types are used, a method is needed to enable software code portability among multiple processors and multiple types of processors along with their respective software environments. Sundance DSP, Inc. has developed a software toolkit called “PARS”, whose objective is to provide a framework that uses suites of tools provided by different vendors, along with modeling tools and a real time operating system, to build an application that spans different processor types. The software language used to express the behavior of the system is a very high level modeling language, “Simulink”, a MathWorks product. ORNL has used this toolkit to effectively implement several deliverables. This CRADA describes this collaboration between ORNL and Sundance DSP, Inc.
Patron ID Cards Made Easy and Cheap
ERIC Educational Resources Information Center
Peischl, Tom
1978-01-01
Four major problems of academic library identification cards are expense, distribution, timeliness, and validation. By moving from a commercially produced plastic library card to a locally produced paper IBM-type card, this library solved these four library card problems. (Author)
A Low-Cost, Efficient, Machine-Assisted Manual Circulation System
ERIC Educational Resources Information Center
Stangl, Peter
1975-01-01
A circulation system uses plastic embossed user cards, an addressograph electric imprinter, a copy of the catalog card as a book card, and a pocket imprinted by the user's card and holding the book card during circulation. (LS)
NASA Astrophysics Data System (ADS)
Esepkina, N. A.; Lavrov, A. P.; Anan'ev, M. N.; Blagodarnyi, V. S.; Ivanov, S. I.; Mansyrev, M. I.; Molodyakov, S. A.
1995-10-01
Two new types of optoelectronic radio-signal processors were investigated. Charge-coupled device (CCD) photodetectors are used in these processors under continuous scanning conditions, i.e. in a time delay and storage mode. One of these processors is based on a CCD photodetector array with a reference-signal amplitude transparency and the other is an adaptive acousto-optical signal processor with linear frequency modulation. The processor with the transparency performs multichannel discrete-analogue convolution of an input signal with a corresponding kernel of the transformation determined by the transparency. If the light source is an array of light-emitting diodes of special (stripe) geometry, the optical stages of the processor can be made from optical fibre components and the whole processor then becomes a rigid 'sandwich' (a compact hybrid optoelectronic microcircuit). A study of a prototype processor with optical fibre components for the reception of signals from a system with antenna aperture synthesis, which forms a radio image of the Earth, is also reported.
Karasick, M.S.; Strip, D.R.
1996-01-30
A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modeling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modeling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modeling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication. 8 figs.
Shared performance monitor in a multiprocessor system
Chiu, George; Gara, Alan G.; Salapura, Valentina
2012-07-24
A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor device units, each processor device for generating signals representing occurrences of events in the processor device, and a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU comprises: a plurality of performance counters, each for counting signals representing occurrences of events from one or more of the plurality of processor units in the multiprocessor system; and a plurality of input devices for receiving the event signals from one or more processor devices of the plurality of processor units, the plurality of input devices programmable to select event signals for receipt by one or more of the plurality of performance counters for counting, wherein the PMU is shared between multiple processing units, or within a group of processors in the multiprocessing system. The PMU is further programmed to monitor event signals issued from non-processor devices.
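The sketch below illustrates the shared-counter idea: one small pool of counters serves all cores, with programmable routing of event signals to counters. Class and method names are assumptions for illustration, not the patent's structure.

class SharedPMU:
    def __init__(self, num_counters=4):
        self.counters = [0] * num_counters
        self.routing = {}                     # (source_id, event_name) -> counter index

    def program(self, source_id, event_name, counter_index):
        # Select which event signals are received by which performance counter.
        self.routing[(source_id, event_name)] = counter_index

    def on_event(self, source_id, event_name):
        idx = self.routing.get((source_id, event_name))
        if idx is not None:
            self.counters[idx] += 1           # processor and non-processor sources are handled alike

pmu = SharedPMU()
pmu.program(source_id=0, event_name="l2_miss", counter_index=0)
pmu.on_event(0, "l2_miss")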
Implementation of kernels on the Maestro processor
NASA Astrophysics Data System (ADS)
Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.
Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to single tile was up to 49 using 49 tiles.
Ordering of guarded and unguarded stores for no-sync I/O
Gara, Alan; Ohmacht, Martin
2013-06-25
A parallel computing system processes at least one store instruction. A first processor core issues a store instruction. A first queue, associated with the first processor core, stores the store instruction. A second queue, associated with a first local cache memory device of the first processor core, stores the store instruction. The first processor core updates first data in the first local cache memory device according to the store instruction. A third queue, associated with at least one shared cache memory device, stores the store instruction. The first processor core invalidates second data, associated with the store instruction, in the at least one shared cache memory. The first processor core invalidates third data, associated with the store instruction, in other local cache memory devices of other processor cores. The first processor core flushes only the first queue.
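A hedged sketch of the queueing and invalidation steps in that abstract follows; the cache model, class names, and timing are illustrative assumptions rather than the patented design.

class SharedCache:
    def __init__(self):
        self.queue = []    # third queue: stores headed for the shared cache
        self.lines = {}

class Core:
    def __init__(self, cid, shared, peers):
        self.cid, self.shared, self.peers = cid, shared, peers
        self.store_queue = []   # first queue: issued store instructions
        self.l1_queue = []      # second queue: stores bound for the local cache
        self.l1 = {}            # first local cache memory device

    def issue_store(self, addr, value):
        self.store_queue.append((addr, value))
        self.l1_queue.append((addr, value))
        self.l1[addr] = value                    # update first data locally
        self.shared.queue.append((addr, value))  # enqueue on the shared cache's queue
        self.shared.lines.pop(addr, None)        # invalidate second data (shared cache)
        for core in self.peers:                  # invalidate third data (other cores' L1s)
            if core is not self:
                core.l1.pop(addr, None)
        self.store_queue.clear()                 # flush only the first queue

# Example wiring: two cores sharing one cache and a common peer list.
shared = SharedCache()
cores = []
for cid in range(2):
    cores.append(Core(cid, shared, cores))
cores[0].issue_store(0x10, 7)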
Family Registration Card as electronic medical carrier in Bosnia and Herzegovina.
Novo, Ahmed; Masic, Izet; Toromanovic, Selim; Loncarevic, Nedim; Junuzovic, Dzelaludin; Dizdarevic, Jadranka
2004-01-01
Medical documentation is a very important part of medical documentalistics and occupies a large part of the daily work of medical staff working in Primary Health Care. Paper documentation is going to be replaced by electronic cards in Bosnia and Herzegovina, and a new Health Care System is under development, based on an Electronic Family Registration Card. Developed countries proceeded from the manual and semiautomatic method of medical data processing to the new method of entering, storage, transferring, searching and protecting data, using electronic equipment. Currently, many European countries have developed a Medical Card Based Electronic Information System. Three types of electronic card are currently in use: a Hybrid Card, a Smart Card and a Laser Card. The dilemma is which card should be used as a data carrier. The Electronic Family Registration Card is a question of strategic interest for B&H, but also a great investment. We should avoid the errors of other countries that have been developing card-based systems. In this article we present all mentioned cards and compare the advantages and disadvantages of the different technologies.
Gas cooled traction drive inverter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinthavali, Madhu Sudhan
2013-10-08
The present invention provides a modular circuit card configuration for distributing heat among a plurality of circuit cards. Each circuit card includes a housing adapted to dissipate heat in response to gas flow over the housing. In one aspect, a gas-cooled inverter includes a plurality of inverter circuit cards, and a plurality of circuit card housings, each of which encloses one of the plurality of inverter cards.
Electrochemical sensing using voltage-current time differential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay
2017-02-28
A device for signal processing. The device includes a signal generator, a signal detector, and a processor. The signal generator generates an original waveform. The signal detector detects an affected waveform. The processor is coupled to the signal detector. The processor receives the affected waveform from the signal detector. The processor also compares at least one portion of the affected waveform with the original waveform. The processor also determines a difference between the affected waveform and the original waveform. The processor also determines a value corresponding to a unique portion of the determined difference between the original and affected waveforms. The processor also outputs the determined value.
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
NASA Technical Reports Server (NTRS)
Downie, John D.; Goodman, Joseph W.
1989-01-01
The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.
Application of an artificial neural network to pump card diagnosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashenayi, K.; Lea, J.F.; Kemp, F.
1994-12-01
Beam pumping is the most frequently used artificial-lift technique for oil production. Downhole pump cards are used to evaluate performance of the pumping unit. Pump cards can be generated from surface dynamometer cards using a 1D wave equation with viscous damping, as suggested by Gibbs and Neely. Pump cards contain significant information describing the behavior of the pump. However, interpretation of these cards is tedious and time-consuming; hence, an automated system capable of interpreting these cards could speed interpretation and warn of pump failures. This work presents the results of a DOS-based computer program capable of correctly classifying pump cards. The program uses a hybrid artificial neural network (ANN) to identify significant features of the pump card. The hybrid ANN uses classical and sinusoidal perceptrons. The network is trained using an error-back-propagation technique. The program correctly identified pump problems for more than 180 different training and test pump cards. The ANN takes a total of 80 data points as input. Sixty data points are collected from the pump card perimeter, and the remaining 20 data points represent the slope at selected points on the pump card perimeter. Pump problem conditions are grouped into 11 distinct classes. The network is capable of identifying one or more of these problem conditions for each pump card. Eight examples are presented and discussed.
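For readers unfamiliar with the setup, here is a hedged sketch of a pump-card classifier with the abstract's 80 inputs (60 perimeter points plus 20 slopes) and 11 problem classes. A standard multilayer perceptron from scikit-learn stands in for the paper's hybrid classical/sinusoidal network, and the training data, hidden-layer size, and hyperparameters are illustrative placeholders only:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(180, 80))        # 180 synthetic "pump cards", 80 features each
y = rng.integers(0, 11, size=180)     # 11 pump-condition classes (synthetic labels)

clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
clf.fit(X, y)                         # trained by back-propagation, as in the abstract
print("training accuracy on synthetic data:", clf.score(X, y))

With real dynamometer-derived features and labeled failure modes in place of the random arrays, the same structure would reproduce the workflow the abstract describes.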
Passive microfluidic array card and reader
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dugan, Lawrence Christopher; Coleman, Matthew A
A microfluidic array card and reader system for analyzing a sample. The microfluidic array card includes a sample loading section for loading the sample onto the microfluidic array card, a multiplicity of array windows, and a transport section or sections for transporting the sample from the sample loading section to the array windows. The microfluidic array card reader includes a housing, a receiving section for receiving the microfluidic array card, a viewing section, and a light source that directs light to the array window of the microfluidic array card and to the viewing section.
Medication safety--reliability of preference cards.
Dawson, Anthony; Orsini, Michael J; Cooper, Mary R; Wollenburg, Karol
2005-09-01
A clinical analysis of surgeons' preference cards was initiated in one hospital as part of a comprehensive analysis to reduce medication-error risks by standardizing and simplifying the intraoperative medication-use process specific to the sterile field. The preference card analysis involved two subanalyses: a review of the information as it appeared on the cards and a failure mode and effects analysis of the process involved in using and maintaining the cards. The analysis found that the preference card system in use at this hospital is outdated. Variations and inconsistencies within the preference card system indicate that the use of preference cards as guides for medication selection for surgical procedures presents an opportunity for medication errors to occur.
Modeling heterogeneous processor scheduling for real time systems
NASA Technical Reports Server (NTRS)
Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.
1994-01-01
A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.
Qin, Juan-Juan; Mao, Wenzhe; Wang, Xiaozhan; Sun, Peng; Cheng, Daqing; Tian, Song; Zhu, Xue-Yong; Yang, Ling; Huang, Zan; Li, Hongliang
2018-06-26
The interplay between sterile inflammation and liver cell death largely determines the hepatic injury caused by an ischemia/reperfusion (I/R) insult. Caspase recruitment domain family member 6 (CARD6) was initially shown to play important roles in NF-κB activation. In our preliminary studies, CARD6 downregulation was closely related to hepatic I/R injury in liver transplantation patients and mouse models. Thus, we hypothesized that CARD6 protects against hepatic I/R injury and investigated the underlying molecular mechanisms. A partial hepatic I/R operation was performed in hepatocyte-specific Card6 knockout mice (HKO), Card6 transgenic mice with CARD6 overexpression specifically in hepatocytes (HTG), and the corresponding control mice. Hepatic histology, serum aminotransferase, inflammatory cytokines/chemokines, cell death, and inflammatory signaling were examined to assess liver damage. The molecular mechanisms of CARD6 function were explored in vivo and in vitro. Card6-HTG mice showed alleviated liver injury compared with control mice, as shown by decreased cell death, lower serum transaminase levels, and reduced inflammation and infiltration, whereas Card6-HKO mice had the opposite phenotype. Mechanistically, phosphorylation of ASK1 and its downstream effectors JNK and p38 was increased in the livers of Card6-HKO mice but repressed in those of Card6-HTG mice. Furthermore, ASK1 knockdown normalized the effect of CARD6 deficiency on the activation of NF-κB, JNK and p38, while ASK1 overexpression abrogated the suppressive effect of CARD6. Finally, CARD6 interacted with ASK1. Mutant CARD6 that lacked the ability to interact with ASK1 could not inhibit ASK1 and failed to protect against hepatic I/R injury. CARD6 is a novel protective factor against hepatic I/R injury that suppresses inflammation and liver cell death by inhibiting the ASK1 signaling pathway. CARD6 participates in, and plays an important role during, the restriction of liver blood flow and its subsequent restoration. By suppressing the activity of ASK1, CARD6 can protect against hepatocyte injury. Targeting CARD6 may be a strategy for the prevention and treatment of this disease. Copyright © 2018 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
48 CFR 32.1108 - Payment by Governmentwide commercial purchase card.
Code of Federal Regulations, 2010 CFR
2010-10-01
... commercial purchase card. 32.1108 Section 32.1108 Federal Acquisition Regulations System FEDERAL ACQUISITION... Governmentwide commercial purchase card. A Governmentwide commercial purchase card charge authorizes the third party (e.g., financial institution) that issued the purchase card to make immediate payment to the...
75 FR 10414 - Researcher Identification Card
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-08
... capturing administrative information on the characteristics of our users. Other forms of identification are... use bar-codes on researcher identification cards in the Washington, DC, area. The plastic cards we... plastic researcher identification cards as part of their security systems, we issue a plastic card to...
48 CFR 2913.301 - Governmentwide commercial purchase card.
Code of Federal Regulations, 2010 CFR
2010-10-01
... purchase card. 2913.301 Section 2913.301 Federal Acquisition Regulations System DEPARTMENT OF LABOR... commercial purchase card. (a) The Government purchase card has far fewer requirements for documentation than other methods of purchasing. However, the same legal restrictions apply to credit card purchases that...
You Can Teach an Old Magician New Tricks
ERIC Educational Resources Information Center
Bonomo, John P.
2008-01-01
Mathematics forms the basis for many types of card tricks. One of the best-known tricks works as follows: Take any 15 cards out of a deck and ask a volunteer to select one of the cards and place it back in the deck. After shuffling the cards, you deal out three piles of cards and ask the volunteer which pile his or her card is in. You collect up the…
Correlates of credit card ownership in men and women.
Yang, Bijou; Lester, David
2005-06-01
In a sample of 352 students, correlates of credit card ownership differed by sex. For both men and women, credit card ownership was predicted by their affective attitude toward credit cards. However, whereas for men concern with money as a tactic for gaining power predicted credit card ownership, for women feelings of insecurity about having sufficient money and having a conservative approach to money predicted credit card ownership.
Renna, Tania Di; Crooks, Simone; Pigford, Ashlee-Ann; Clarkin, Chantalle; Fraser, Amy B; Bunting, Alexandra C; Bould, M Dylan; Boet, Sylvain
2016-09-01
This study aimed to assess the perceived value of the Cognitive Aids for Role Definition (CARD) protocol for simulated intraoperative cardiac arrests. Sixteen interprofessional operating room teams completed three consecutive simulated intraoperative cardiac arrest scenarios: current standard, no CARD; CARD, no CARD teaching; and CARD, didactic teaching. Each team participated in a focus group interview immediately following the third scenario; data were transcribed verbatim and qualitatively analysed. After 6 months, participants formed eight new teams randomised to two groups (CARD or no CARD) and completed a retention intraoperative cardiac arrest simulation scenario. All simulation sessions were video recorded and expert raters assessed team performance. Qualitative analysis of the 16 focus group interviews revealed 3 thematic dimensions: role definition in crisis management; logistical issues; and the "real life" applicability of CARD. Members of the interprofessional team perceived CARD very positively. Exploratory quantitative analysis found no significant differences in team performance with or without CARD (p > 0.05). In conclusion, qualitative data suggest that the CARD protocol clarifies roles and team coordination during interprofessional crisis management and has the potential to improve the team performance. The concept of a self-organising team with defined roles is promising for patient safety.
Parallel processor for real-time structural control
NASA Astrophysics Data System (ADS)
Tise, Bert L.
1993-07-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
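The multiply/accumulate workload mentioned above is essentially the discrete-time state-space update computed each sample. A minimal Python sketch of that update follows; the system matrices and the constant input are arbitrary placeholders, not the controller described in the abstract:

import numpy as np

# Discrete-time state-space form: x[k+1] = A x[k] + B u[k],  y[k] = C x[k] + D u[k]
A = np.array([[0.95, 0.10], [0.00, 0.90]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))
for k in range(5):
    u = np.array([[1.0]])      # placeholder for a sampled sensor input (A/D stage)
    y = C @ x + D @ u          # controller output sent to the D/A stage
    x = A @ x + B @ u          # state update: the multiply/accumulate core
    print(k, y[0, 0])

On the hardware described above, this loop is distributed across the floating-point processor modules so that the whole update completes within one sampling interval.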
Testing and operating a multiprocessor chip with processor redundancy
Bellofatto, Ralph E; Douskey, Steven M; Haring, Rudolf A; McManus, Moyra K; Ohmacht, Martin; Schmunkamp, Dietmar; Sugavanam, Krishnan; Weatherford, Bryan J
2014-10-21
A system and method for improving the yield rate of a multiprocessor semiconductor chip that includes primary processor cores and one or more redundant processor cores. A first tester conducts a first test on one or more processor cores, and encodes results of the first test in an on-chip non-volatile memory. A second tester conducts a second test on the processor cores, and encodes results of the second test in an external non-volatile storage device. An override bit of a multiplexer is set if a processor core fails the second test. In response to the override bit, the multiplexer selects a physical-to-logical mapping of processor IDs according to one of: the encoded results in the memory device or the encoded results in the external storage device. On-chip logic configures the processor cores according to the selected physical-to-logical mapping.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, D.A.; Grunwald, D.C.
The spectrum of parallel processor designs can be divided into three sections according to the number and complexity of the processors. At one end there are simple, bit-serial processors. Any one of these processors is of little value, but when it is coupled with many others, the aggregate computing power can be large. This approach to parallel processing can be likened to a colony of termites devouring a log. The most notable examples of this approach are the NASA/Goodyear Massively Parallel Processor, which has 16K one-bit processors, and the Thinking Machines Connection Machine, which has 64K one-bit processors. At the other end of the spectrum, a small number of processors, each built using the fastest available technology and the most sophisticated architecture, are combined. An example of this approach is the Cray X-MP. This type of parallel processing is akin to four woodmen attacking the log with chainsaws.
Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger
NASA Astrophysics Data System (ADS)
Conde Muíño, P.; ATLAS Collaboration
2017-10-01
General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼250 ms for this task. The selection in the High Level Trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will grow with future LHC upgrades. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value, and will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPUs as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and the serial code remaining on the CPU; the number of GPGPUs required; and the relative financial cost of the selected GPGPUs. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
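A back-of-the-envelope check of the farm concurrency implied by the two numbers quoted above (100 kHz input rate and roughly 250 ms average processing per event); the interpretation as a core-count estimate is an assumption for illustration, not an ATLAS figure:

# Little's-law style estimate: events in flight = arrival rate * average service time.
l1_rate_hz = 100e3      # Level-1 acceptance rate quoted in the abstract
t_event_s = 0.250       # average per-event HLT processing time quoted in the abstract
in_flight = l1_rate_hz * t_event_s
print(f"events processed concurrently: {in_flight:.0f}")   # about 25,000
# If each CPU core handled one event at a time, this is also a rough estimate of the
# number of busy cores, which is why per-event speedups from GPGPU offload matter.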
Evolution of a Novel Antiviral Immune-Signaling Interaction by Partial-Gene Duplication
Korithoski, Bryan; Kolaczkowski, Oralia; Mukherjee, Krishanu; Kola, Reema; Earl, Chandra; Kolaczkowski, Bryan
2015-01-01
The RIG-like receptors (RLRs) are related proteins that identify viral RNA in the cytoplasm and activate cellular immune responses, primarily through direct protein-protein interactions with the signal transducer, IPS1. Although it has been well established that the RLRs, RIG-I and MDA5, activate IPS1 through binding between the twin caspase activation and recruitment domains (CARDs) on the RLR and a homologous CARD on IPS1, it is less clear which specific RLR CARD(s) are required for this interaction, and almost nothing is known about how the RLR-IPS1 interaction evolved. In contrast to what has been observed in the presence of immune-modulating K63-linked polyubiquitin, here we show that—in the absence of ubiquitin—it is the first CARD domain of human RIG-I and MDA5 (CARD1) that binds directly to IPS1 CARD, and not the second (CARD2). Although the RLRs originated in the earliest animals, both the IPS1 gene and the twin-CARD domain architecture of RIG-I and MDA5 arose much later in the deuterostome lineage, probably through a series of tandem partial-gene duplication events facilitated by tight clustering of RLRs and IPS1 in the ancestral deuterostome genome. Functional differentiation of RIG-I CARD1 and CARD2 appears to have occurred early during this proliferation of RLR and related CARDs, potentially driven by adaptive coevolution between RIG-I CARD domains and IPS1 CARD. However, functional differentiation of MDA5 CARD1 and CARD2 occurred later. These results fit a general model in which duplications of protein-protein interaction domains into novel gene contexts could facilitate the expansion of signaling networks and suggest a potentially important role for functionally-linked gene clusters in generating novel immune-signaling pathways. PMID:26356745
48 CFR 552.232-77 - Payment By Government Charge Card.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Charge Card. 552.232-77 Section 552.232-77 Federal Acquisition Regulations System GENERAL SERVICES....232-77 Payment By Government Charge Card. As prescribed in 532.7003, insert the following clause: Payment By Government Charge Card (NOV 2009) (a) Definitions. “Governmentwide commercial purchase card...
An FPGA Implementation to Detect Selective Cationic Antibacterial Peptides
Polanco González, Carlos; Nuño Maganda, Marco Aurelio; Arias-Estrada, Miguel; del Rio, Gabriel
2011-01-01
Exhaustive prediction of physicochemical properties of peptide sequences is used in different areas of biological research. One example is the identification of selective cationic antibacterial peptides (SCAPs), which may be used in the treatment of different diseases. Due to the discrete nature of peptide sequences, the physicochemical property calculation is considered a high-performance computing problem. A competitive solution for this class of problems is to embed algorithms into dedicated hardware. In the present work we describe the adaptation, design, and implementation of an algorithm for SCAP prediction on a Field Programmable Gate Array (FPGA) platform. Four physicochemical property codes useful in the identification of peptide sequences with potential selective antibacterial activity were implemented on an FPGA board. The speed-up gained with a single-copy implementation was up to 108 times that of a single Intel processor, compared cycle for cycle. The inherent scalability of our design allows replication of this code onto multiple FPGA cards, so further improvements in speed are possible. Our results describe the first embedded SCAP-prediction solution reported and constitute the grounds for efficiently performing an exhaustive analysis of the relationship between peptide sequences and their physicochemical properties. PMID:21738652
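As a point of reference for the kind of per-sequence property computed exhaustively in such a pipeline, here is a small Python sketch of one simple physicochemical property, an approximate net charge at neutral pH from residue counts. This is a generic illustration, not one of the four FPGA-implemented property codes of the paper, and the example sequences are arbitrary:

# Approximate net charge at pH ~7: count basic (K, R) minus acidic (D, E) residues.
# Histidine is mostly uncharged at pH 7 and is ignored in this crude estimate.
def approx_net_charge(peptide: str) -> int:
    basic = sum(peptide.count(r) for r in "KR")
    acidic = sum(peptide.count(r) for r in "DE")
    return basic - acidic

if __name__ == "__main__":
    for seq in ["KKLLRRWWK", "DDEEGGS", "GIGKFLKSAKKFGKAFVKILNS"]:  # example peptides
        print(seq, approx_net_charge(seq))

Cationic antibacterial candidates are typically those with a positive net charge, which is why such per-sequence properties are computed over very large sequence sets and benefit from hardware acceleration.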
A Ground Testbed to Advance US Capability in Autonomous Rendezvous and Docking Project
NASA Technical Reports Server (NTRS)
D'Souza, Chris
2014-01-01
This project will advance the Autonomous Rendezvous and Docking (AR&D) GNC system by testing it on hardware, particularly in a flight processor, with a goal of testing it in IPAS with the Waypoint L2 AR&D scenario. The entire Agency supports development of a Commodity for Autonomous Rendezvous and Docking (CARD) as outlined in the Agency-wide Community of Practice whitepaper entitled: "A Strategy for the U.S. to Develop and Maintain a Mainstream Capability for Automated/Autonomous Rendezvous and Docking in Low Earth Orbit and Beyond". The whitepaper establishes that 1) the US is in a continual state of AR&D point-designs and therefore there is no US "off-the-shelf" AR&D capability in existence today, 2) the US has fallen behind our foreign counterparts particularly in the autonomy of AR&D systems, 3) development of an AR&D commodity is a national need that would benefit NASA, our commercial partners, and DoD, and 4) an initial estimate indicates that the development of a standardized AR&D capability could save the US approximately $60M for each AR&D project and cut each project's AR&D flight system implementation time in half.
A portable high-definition electronic endoscope based on embedded system
NASA Astrophysics Data System (ADS)
Xu, Guang; Wang, Liqiang; Xu, Jin
2012-11-01
This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6-inch CMOS image sensor is used to acquire HD images of 1280×800 pixels. The camera interface of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard. Image rotation (90 degrees clockwise) and image processing functions are handled by the CAMIF. The decode engine of the processor plays back or records HD video at 30 frames per second, and a built-in HDMI interface transmits high-definition images to an external display. Image processing steps such as demosaicking, color correction, and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the images in real time. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4×4.8×15 mm, with a working distance of more than 3 meters. The whole endoscope system can be powered by a lithium battery, offering the advantages of small size, low cost, and portability.
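One of the processing steps named above, auto white balance, can be illustrated with a gray-world sketch in Python. This is a generic textbook method offered for orientation, not necessarily the algorithm used on the Cortex-A8 platform, and the input frame here is a random placeholder:

import numpy as np

def gray_world_awb(rgb: np.ndarray) -> np.ndarray:
    # Gray-world assumption: the average color of the scene is neutral gray,
    # so each channel is scaled toward the global mean intensity.
    img = rgb.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(800, 1280, 3), dtype=np.uint8)  # placeholder 1280x800 frame
    balanced = gray_world_awb(frame)
    print(balanced.shape, balanced.dtype)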
Graphics Processors in HEP Low-Level Trigger Systems
NASA Astrophysics Data System (ADS)
Ammendola, Roberto; Biagioni, Andrea; Chiozzi, Stefano; Cotta Ramusino, Angelo; Cretaro, Paolo; Di Lorenzo, Stefano; Fantechi, Riccardo; Fiorini, Massimiliano; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Martinelli, Michele; Neri, Ilaria; Paolucci, Pier Stanislao; Pastorelli, Elena; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Vicini, Piero
2016-11-01
Usage of Graphics Processing Units (GPUs) in the so called general-purpose computing is emerging as an effective approach in several fields of science, although so far applications have been employing GPUs typically for offline computations. Taking into account the steady performance increase of GPU architectures in terms of computing power and I/O capacity, the real-time applications of these devices can thrive in high-energy physics data acquisition and trigger systems. We will examine the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on tests performed on the trigger system of the CERN NA62 experiment. To successfully integrate GPUs in such an online environment, latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Furthermore, it is assessed how specific trigger algorithms can be parallelized and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen Large Hadron Collider (LHC) luminosity upgrade where highly selective algorithms will be essential to maintain sustainable trigger rates with very high pileup.
CUDAICA: GPU Optimization of Infomax-ICA EEG Analysis
Raimondo, Federico; Kamienkowski, Juan E.; Sigman, Mariano; Fernandez Slezak, Diego
2012-01-01
In recent years, Independent Component Analysis (ICA) has become a standard method for identifying relevant dimensions of data in neuroscience. ICA is a very reliable method for analyzing data, but it is computationally very costly, and using ICA for online data analysis, as in brain-computer interfaces, has been almost completely prohibitive. We show an approximately 25-fold speedup of ICA at almost no cost (a fast video card). EEG data, consisting of many independent signals repeated across multiple channels, are very well suited to processing with the vector processors included in graphics units. We profiled the implementation of this algorithm and detected two main types of operations responsible for the processing bottleneck and taking almost 80% of the computing time: vector-matrix and matrix-matrix multiplications. Simply replacing calls to basic linear algebra (BLAS) functions with the standard CUBLAS routines provided by the GPU manufacturer does not increase performance, due to CUDA kernel launch overhead. Instead, we developed a GPU-based solution that, compared with the original BLAS and CUBLAS versions, obtains a 25x increase in performance for the ICA calculation. PMID:22811699
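A quick way to see why the two operation types named above dominate is to time them directly. A small sketch with NumPy standing in for BLAS follows; the channel and sample counts are arbitrary placeholders, not the EEG dimensions used in CUDAICA:

import time
import numpy as np

channels, samples = 64, 100_000
W = np.random.rand(channels, channels)      # unmixing matrix
X = np.random.rand(channels, samples)       # multichannel EEG block
w_vec = np.random.rand(channels)

t0 = time.perf_counter(); _ = W @ X; t_mm = time.perf_counter() - t0       # matrix-matrix product
t0 = time.perf_counter(); _ = w_vec @ X; t_vm = time.perf_counter() - t0   # vector-matrix product
print(f"matrix-matrix: {t_mm*1e3:.1f} ms, vector-matrix: {t_vm*1e3:.1f} ms")
# Moving these products to the GPU is what yields the roughly 25x speedup reported above.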
NASA Astrophysics Data System (ADS)
Abramov, G. V.; Gavrilov, A. N.
2018-03-01
The article deals with the numerical solution of a mathematical model of particle motion and interaction in a multicomponent plasma, using the electric-arc synthesis of carbon nanostructures as an example. The large number of particles and of their interactions requires significant machine resources and computation time. Application of the large-particle (macro-particle) method makes it possible to reduce the amount of computation and the hardware requirements without affecting the accuracy of the numerical calculations. GPGPU parallel computing with the Nvidia CUDA technology allows all general-purpose computation to be organized on the processor of the graphics card. A comparative analysis of different approaches to parallelizing the computations was carried out to speed up the calculations, and an algorithm was chosen in which shared memory is used to preserve the accuracy of the solution. A numerical study has been carried out of the influence of the number of particles grouped into a macro-particle on the motion parameters and the total number of particle collisions in the plasma for different synthesis modes. A rational range of the coherence coefficient of particles in a macro-particle is computed.
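A hedged sketch of the macro-particle (large-particle) idea referred to above: many physical particles are grouped into one computational particle whose charge and mass are scaled by the grouping coefficient, so the charge-to-mass ratio, and hence the trajectory in given fields, is preserved while far fewer interactions are computed. The particle counts and species are illustrative only and are not taken from the article:

n_physical = 1_000_000       # physical particles to represent
k_group = 1_000              # particles per macro-particle (the grouping coefficient)
n_macro = n_physical // k_group

e_charge = 1.602e-19         # C, elementary charge
m_electron = 9.109e-31       # kg, electron mass

# Each macro-particle carries k_group times the charge and mass of one electron,
# so q/m (and therefore the motion in given electromagnetic fields) is unchanged.
macro_charge = k_group * e_charge
macro_mass = k_group * m_electron
print(f"{n_macro} macro-particles instead of {n_physical} physical particles")
print(f"q/m preserved: {macro_charge / macro_mass:.3e} == {e_charge / m_electron:.3e}")

The computational saving comes from the pairwise interaction count, which scales with the square of the number of simulated particles, at the cost of coarser collision statistics; hence the article's study of how the grouping coefficient affects collision counts.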
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay
2018-01-02
A device for signal processing. The device includes a signal generator, a signal detector, and a processor. The signal generator generates an original waveform. The signal detector detects an affected waveform. The processor is coupled to the signal detector. The processor receives the affected waveform from the signal detector. The processor also compares at least one portion of the affected waveform with the original waveform. The processor also determines a difference between the affected waveform and the original waveform. The processor also determines a value corresponding to a unique portion of the determined difference between the original and affected waveforms. The processor also outputs the determined value.
Credit Cards. Bulletin No. 721. (Revised.)
ERIC Educational Resources Information Center
Fox, Linda Kirk
This cooperative extension bulletin provides basic information about credit cards and their use. It covers the following topics: types of credit cards (revolving credit, travel and entertainment, and debit); factors to consider when evaluating a credit card (interest rates, grace period, and annual membership fee); other credit card costs (late…
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Purchased credit card relationships, servicing assets, intangible assets (other than purchased credit card relationships and servicing assets), credit... Purchased credit card relationships, servicing assets, intangible assets (other than purchased credit card...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 1 2013-01-01 2013-01-01 false Purchased credit card relationships, servicing assets, intangible assets (other than purchased credit card relationships and servicing assets), credit... Purchased credit card relationships, servicing assets, intangible assets (other than purchased credit card...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 6 2013-01-01 2012-01-01 true Purchased credit card relationships, servicing assets, intangible assets (other than purchased credit card relationships and servicing assets), credit... credit card relationships, servicing assets, intangible assets (other than purchased credit card...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Purchased credit card relationships, servicing assets, intangible assets (other than purchased credit card relationships and servicing assets), credit... Purchased credit card relationships, servicing assets, intangible assets (other than purchased credit card...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 6 2012-01-01 2012-01-01 false Purchased credit card relationships, servicing assets, intangible assets (other than purchased credit card relationships and servicing assets), credit... credit card relationships, servicing assets, intangible assets (other than purchased credit card...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-25
... savings associations). The OCC requires independent credit card banks and independent credit card Federal... by the bank or Federal savings association. Independent credit card banks and independent credit card... credit card operations and are not affiliated with a full service national bank or Federal savings...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 6 2014-01-01 2012-01-01 true Purchased credit card relationships, servicing assets, intangible assets (other than purchased credit card relationships and servicing assets), credit... credit card relationships, servicing assets, intangible assets (other than purchased credit card...
Nuclear Wallet Cards, 8th Edition, 2011.
Evolution of optically nondestructive and data-non-intrusive credit card verifiers
NASA Astrophysics Data System (ADS)
Sumriddetchkajorn, Sarun; Intaravanne, Yuttana
2010-04-01
Since the deployment of the credit card, the number of credit card fraud cases has grown rapidly, with losses amounting to millions of US dollars. Instead of asking the cardholder for more information or taking on risk through payment approval, a nondestructive and data-non-intrusive credit card verifier is highly desirable before a transaction begins. In this paper, we review optical techniques that have been proposed and invented in order to make a genuine credit card more distinguishable from a counterfeit one. Several optical approaches for the implementation of credit card verifiers are also included. In particular, we highlight our invention of a hyperspectral-imaging-based portable credit card verifier structure that offers a very low false error rate of 0.79%. Other key features include low cost, simplicity of design and implementation, no moving parts, no need for an additional decoding key, and adaptive learning.
Three dimensional identification card and applications
NASA Astrophysics Data System (ADS)
Zhou, Changhe; Wang, Shaoqing; Li, Chao; Li, Hao; Liu, Zhao
2016-10-01
A three-dimensional identification card, with a three-dimensional personal image displayed and stored for personal identification, is expected to be the advanced version of the present two-dimensional identification card in the future [1]. A three-dimensional identification card means that three-dimensional optical techniques are used: the personal image on the ID card is displayed in three dimensions, so that a three-dimensional personal face can be seen. The ID card also stores the three-dimensional face information in its embedded electronic chip; this information might be recorded using two-channel cameras and can be displayed on a computer as three-dimensional images for personal identification. Three-dimensional ID cards might be an interesting direction for upgrading the present two-dimensional cards in the future. They might be widely used at airport customs, at the entrances of hotels, schools, and universities, as a passport for online banking, for registration in online games, and so on.
Design and implementation of a smart card based healthcare information system.
Kardas, Geylani; Tunali, E Turhan
2006-01-01
Smart cards are used in information technologies as portable integrated devices with data storage and data processing capabilities. As in other fields, smart card use in health systems has become popular due to their increased capacity and performance. Their efficiency, together with easy and fast data access, has led to widespread adoption, particularly in security systems. In this paper, a smart card based healthcare information system is developed. The system uses smart cards for personal identification and transfer of health data, and provides data communication via a distributed protocol developed particularly for this study. Two smart card software modules are implemented that run on patient and healthcare professional smart cards, respectively. In addition to personal information, general health information about the patient is also loaded onto the patient smart card. Health care providers use their own smart cards to be authenticated on the system and to access data on patient cards. Encryption keys and digital signature keys stored on the smart cards of the system are used for secure and authenticated data communication between clients and database servers over the distributed object protocol. The system is developed on the Java platform using an object-oriented architecture and design patterns.
Qin, Haihong; Jin, Jiang; Fischer, Heinz; Mildner, Michael; Gschwandtner, Maria; Mlitz, Veronika; Eckhart, Leopold; Tschachler, Erwin
2017-08-01
CARD18 contains a caspase recruitment domain (CARD) via which it binds to caspase-1 and thereby inhibits caspase-1-mediated activation of the pro-inflammatory cytokine interleukin (IL)-1β. The aim of this study was to determine the expression profile and the role of CARD18 during differentiation of keratinocytes and to compare the expression of CARD18 in normal skin and in inflammatory skin diseases. Human keratinocytes were induced to differentiate in monolayer and in 3D skin equivalent cultures. In some experiments, CARD18-specific siRNAs were used to knock down expression of CARD18. CARD18 mRNA levels were determined by quantitative real-time PCR, and CARD18 protein was detected by Western blot and immunofluorescence analyses. In situ expression was analyzed in skin biopsies obtained from healthy donors and patients with psoriasis and lichen planus. CARD18 mRNA was expressed in the epidermis at more than 100-fold higher levels than in any other human tissue. Within the epidermis, CARD18 was specifically expressed in the granular layer. In vitro, CARD18 was strongly upregulated at both mRNA and protein levels in keratinocytes undergoing terminal differentiation. In skin equivalent cultures the expression of CARD18 was efficiently suppressed by siRNAs without impairing stratum corneum formation. Epidermal expression of CARD18 was increased after ultraviolet (UV)B irradiation of skin explants. In skin biopsies of patients with psoriasis, no consistent regulation of CARD18 expression was observed; however, in lesional epidermis of patients with lichen planus, CARD18 expression was either greatly diminished or entirely absent, whereas in non-lesional areas expression was comparable to normal skin. Our results identify CARD18 as a differentiation-associated keratinocyte protein that is altered in abundance by UV stress. Its downregulation in lichen planus indicates a potential role in inflammatory reactions of the epidermis in this disease. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.
Oweiss, Karim G
2006-07-01
This paper suggests a new approach for data compression during extracutaneous transmission of neural signals recorded by a high-density microelectrode array in the cortex. The approach is based on exploiting the temporal and spatial characteristics of the neural recordings in order to strip out redundancy and infer the useful information early in the data stream. The proposed signal processing algorithms augment current filtering and amplification capabilities and may be a viable replacement for the on-chip spike detection and sorting currently employed to remedy bandwidth limitations. Temporal processing is devised by exploiting the sparseness capabilities of the discrete wavelet transform, while spatial processing exploits the reduction in the number of physical channels through quasi-periodic eigendecomposition of the data covariance matrix. Our results demonstrate that substantial improvements are obtained in terms of lower transmission bandwidth, reduced latency, and optimized processor utilization. We also demonstrate the improvements qualitatively in terms of superior denoising capabilities and higher fidelity of the obtained signals.
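A hedged sketch of the two ingredients named in the abstract: wavelet-domain sparsification for temporal redundancy and eigendecomposition of the channel covariance for spatial redundancy. The data, wavelet choice, thresholds, and number of retained components are placeholders, and this is an illustration of the general technique rather than the authors' algorithm:

import numpy as np
import pywt

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 4096))               # 16 electrode channels, synthetic samples

# Temporal step: keep only the largest wavelet coefficients on each channel.
def sparsify(row, wavelet="db4", level=4, keep=0.05):
    coeffs = pywt.wavedec(row, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thr = np.quantile(np.abs(flat), 1.0 - keep)      # keep roughly the top 5% of coefficients
    return [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]

sparse_coeffs = [sparsify(ch) for ch in X]

# Spatial step: eigendecomposition of the channel covariance, keep dominant components.
cov = np.cov(X)
eigvals, eigvecs = np.linalg.eigh(cov)
top = eigvecs[:, -4:]                                # 4 strongest spatial components
reduced = top.T @ X                                  # 4 "virtual channels" instead of 16
print(reduced.shape)

Only the retained coefficients and components need to be transmitted, which is where the bandwidth reduction described in the abstract comes from.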
Autonomous space processor for orbital debris
NASA Technical Reports Server (NTRS)
Ramohalli, Kumar; Campbell, David; Brockman, Jeff P.; Carter, Bruce; Donelson, Leslie; John, Lawrence E.; Marine, Micky C.; Rodina, Dan D.
1989-01-01
This work continues to develop advanced designs toward the ultimate goal of a Get Away Special payload to demonstrate economical removal of orbital debris utilizing local resources in orbit. The fundamental technical feasibility was demonstrated last year through theoretical calculations, quantitative computer animation, a solar focal-point cutter, a robotic arm design, and a subscale model. During this reporting period, several improvements were made in the solar cutter, such as auto-track capabilities, better quality reflectors, and a more versatile framework. The major advance has been in the design, fabrication, and working demonstration of a robotic arm that has several degrees of freedom. The functions were specifically tailored for orbital debris handling. These advances are discussed here. Also, a small fraction of the resources was allocated to research on flame augmentation in scramjets for the NASP. Here, the fundamental advance was the attainment of Mach numbers up to 0.6 in the flame zone and a vastly improved injection system; the current work is expected to achieve supersonic combustion in the laboratory and an advanced monitoring system.
Low Latency DESDynI Data Products for Disaster Response, Resource Management and Other Applications
NASA Technical Reports Server (NTRS)
Doubleday, Joshua R.; Chien, Steve A.; Lou, Yunling
2011-01-01
We are developing onboard processor technology targeted at the L-band SAR instrument on the planned DESDynI mission to enable formation of SAR images onboard, opening possibilities for near-real-time data products that augment the full data streams. Several image processing and/or interpretation techniques are being explored as possible direct-broadcast products for agencies in need of low-latency data and responsible for disaster mitigation and assessment, resource management, agricultural development, shipping, etc. Data collected with UAVSAR (L-band) serve as a surrogate for the future DESDynI instrument. We have explored surface water extent as a tool for flood response, and disturbance images based on the polarimetric backscatter of repeat-pass imagery, potentially useful for structural collapse (earthquakes), mud/land/debris slides, etc. We have also explored building vegetation and snow/ice classifiers via support vector machines, utilizing quad-pol backscatter, cross-pol phase, and a number of derivatives (radar vegetation index, dielectric estimates, etc.). We share our qualitative and quantitative results thus far.
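A hedged sketch of the classifier-building step mentioned above, with scikit-learn's SVM standing in for the authors' implementation; the feature values (quad-pol backscatter, cross-pol phase, and a derived vegetation index) and the labels are synthetic placeholders:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Columns: HH, HV, VV backscatter (dB), cross-pol phase (rad), radar vegetation index.
X = np.column_stack([
    rng.uniform(-25, -5, 500),
    rng.uniform(-35, -15, 500),
    rng.uniform(-25, -5, 500),
    rng.uniform(-np.pi, np.pi, 500),
    rng.uniform(0, 1, 500),
])
y = rng.integers(0, 2, 500)    # 0 = not vegetation, 1 = vegetation (synthetic labels)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[:400], y[:400])
print("held-out accuracy on synthetic data:", clf.score(X[400:], y[400:]))

With features extracted per pixel from UAVSAR imagery and labels from reference maps, the same fit/predict pattern would produce the vegetation or snow/ice masks suitable for a low-latency direct-broadcast product.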
Analyzing checkpointing trends for applications on the IBM Blue Gene/P system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naik, H.; Gupta, R.; Beckman, P.
Current petascale systems have tens of thousands of hardware components and complex system software stacks, which increase the probability of faults occurring during the lifetime of a process. Checkpointing has been a popular method of providing fault tolerance in high-end systems. While considerable research has been done to optimize checkpointing, in practice the method still involves a high-cost overhead for users. In this paper, we study the checkpointing overhead seen by applications running on leadership-class machines such as the IBM Blue Gene/P at Argonne National Laboratory. We study various applications and design a methodology to assist users in understanding and choosing checkpointing frequency and reducing the overhead incurred. In particular, we study three popular applications -- the Grid-Based Projector-Augmented Wave application, the Carr-Parrinello Molecular Dynamics application, and a Nek5000 computational fluid dynamics application -- and analyze their memory usage and possible checkpointing trends on 32,768 processors of the Blue Gene/P system.
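For context on choosing checkpointing frequency, a commonly used first-order estimate is Young's approximation, which balances checkpoint cost against expected rework after a failure. It is offered here as textbook background, not as a result of the paper, and the cost and MTBF values are hypothetical:

import math

def young_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    # Young's approximation: optimal checkpoint interval ~ sqrt(2 * C * MTBF).
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

if __name__ == "__main__":
    C = 300.0            # hypothetical cost of writing one checkpoint, seconds
    MTBF = 24 * 3600.0   # hypothetical system mean time between failures, seconds
    print(f"checkpoint roughly every {young_interval(C, MTBF) / 60:.0f} minutes")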
Advanced design for orbital debris removal in support of solar system exploration
NASA Technical Reports Server (NTRS)
1991-01-01
The development of an Autonomous Space Processor for Orbital Debris (ASPOD) is the ultimate goal. The craft will process, in situ, orbital debris using resources available in low Earth orbit (LEO). The serious problem of orbital debris is briefly described and the nature of the large debris population is outlined. This year, focus was on the development of a versatile robotic manipulator to augment an existing robotic arm; incorporation of remote operation of the robotic arms; and formulation of optimal (time and energy) trajectory planning algorithms for coordinating the robotic arms. The mechanical design of the new arm is described in detail. The versatile work envelope is explained, showing the flexibility of the new design. Several telemetry communication systems are described which will enable the remote operation of the robotic arms. The trajectory planning algorithms are fully developed for both the time-optimal and the energy-optimal problems. The time-optimal problem is solved using phase-plane techniques, while the energy-optimal problem is solved using dynamic programming.
Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)
1997-01-01
In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the actual tool descriptions, we will present the considerations that led to the particular design choices. Many of these are motivated by the requirement that Charon must be useful within the traditional computational environments of Fortran 77 and C. Only the Fortran 77 syntax will be presented in this report.
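A minimal sketch of the kind of block data distribution such a toolkit computes, here a one-dimensional split of a structured-grid index range across processors. This illustrates the concept only and does not reproduce Charon's API or its Fortran 77/C bindings:

def block_partition(n_points: int, n_procs: int):
    """Return (start, end) index pairs, 0-based and end-exclusive, one per processor."""
    base, extra = divmod(n_points, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)   # spread the remainder over the first ranks
        ranges.append((start, start + size))
        start += size
    return ranges

if __name__ == "__main__":
    for rank, (lo, hi) in enumerate(block_partition(1000, 6)):
        print(f"rank {rank}: grid points [{lo}, {hi})")

In a multi-dimensional decomposition the same idea is applied per axis, and each processor additionally exchanges a halo of neighboring points, which is the bookkeeping a toolkit like this automates on top of the message-passing layer.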
Hybrid Electro-Optic Processor
1991-07-01
This report describes the design of a hybrid electro-optic processor to perform adaptive interference cancellation in radar systems. The processor is...modulator is reported. Included in this report is a discussion of the design, partial fabrication in the laboratory, and partial testing of the hybrid electro-optic processor. A follow-on effort is planned to complete the construction and testing of the processor. The work described in this report is the
JPRS Report, Science & Technology, Europe.
1991-04-30
processor in collaboration with Intel. The processor, christened Touchstone, will be used as the core of a parallel computer with 2,000 processors. One of... ELECTRONIQUE HEBDO in French 24 Jan 91 pp 14-15 [Article by Claire Remy: "Everything Set for Neural Signal Processors"] first paragraph is ELECTRONIQUE... paving the way for neural signal processors in so doing. The principal advantage of this specific circuit over a neuromimetic software program is
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
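A loose Python model of the idea in this abstract, purely to make the data flow concrete: a compiler-produced table marks sensitive logical registers, and the processor keeps a duplicate copy that can be compared on read. The names, table format, and check-on-read behavior are invented for illustration and are not taken from the patent:

# Hypothetical model of register duplication driven by an error-correction table.
error_correction_table = {"r3", "r7"}   # logical registers the compiler marked as sensitive

registers = {}
shadow = {}                             # duplicate copies kept for sensitive registers

def write_reg(name, value):
    registers[name] = value
    if name in error_correction_table:
        shadow[name] = value            # keep a duplicate of each sensitive register

def read_reg(name):
    value = registers[name]
    if name in error_correction_table and shadow.get(name) != value:
        raise RuntimeError(f"soft error detected in {name}")  # mismatch between the two copies
    return value

write_reg("r3", 42)
registers["r3"] = 41                    # simulate a bit flip corrupting the register
try:
    read_reg("r3")
except RuntimeError as e:
    print(e)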
The CSM testbed matrix processors internal logic and dataflow descriptions
NASA Technical Reports Server (NTRS)
Regelbrugge, Marc E.; Wright, Mary A.
1988-01-01
This report constitutes the final report for subtask 1 of Task 5 of NASA Contract NAS1-18444, Computational Structural Mechanics (CSM) Research. This report contains a detailed description of the coded workings of selected CSM Testbed matrix processors (i.e., TOPO, K, INV, SSOL) and of the arithmetic utility processor AUS. These processors and the current sparse matrix data structures are studied and documented. Items examined include: details of the data structures, interdependence of data structures, data-blocking logic in the data structures, processor data flow and architecture, and processor algorithmic logic flow.
Parallel processor for real-time structural control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tise, B.L.
1992-01-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An Open Windows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
12 CFR 226.12 - Special credit card provisions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Special credit card provisions. 226.12 Section... SYSTEM TRUTH IN LENDING (REGULATION Z) Open-End Credit § 226.12 Special credit card provisions. (a) Issuance of credit cards. Regardless of the purpose for which a credit card is to be used, including...
37 CFR 1.23 - Methods of payment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... made by credit card, except for replenishing a deposit account. Payment of a fee by credit card must specify the amount to be charged to the credit card and such other information as is necessary to process... authorization to charge fees to a credit card. If credit card information is provided on a form or document...
12 CFR 226.12 - Special credit card provisions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Special credit card provisions. 226.12 Section... SYSTEM (CONTINUED) TRUTH IN LENDING (REGULATION Z) Open-End Credit § 226.12 Special credit card provisions. (a) Issuance of credit cards. Regardless of the purpose for which a credit card is to be used...
37 CFR 1.23 - Methods of payment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... made by credit card, except for replenishing a deposit account. Payment of a fee by credit card must specify the amount to be charged to the credit card and such other information as is necessary to process... authorization to charge fees to a credit card. If credit card information is provided on a form or document...
12 CFR 1026.12 - Special credit card provisions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Special credit card provisions. 1026.12 Section...-End Credit § 1026.12 Special credit card provisions. (a) Issuance of credit cards. Regardless of the purpose for which a credit card is to be used, including business, commercial, or agricultural use, no...
37 CFR 1.23 - Methods of payment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... made by credit card, except for replenishing a deposit account. Payment of a fee by credit card must specify the amount to be charged to the credit card and such other information as is necessary to process... authorization to charge fees to a credit card. If credit card information is provided on a form or document...
12 CFR Rules for Card Issuers That... - by-Transaction Basis
Code of Federal Regulations, 2011 CFR
2011-01-01
... THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Special Rules Applicable to Credit Card... Subpart B apply if credit cards are issued and the card issuer and the seller are the same or related... creditor have an agreement authorizing the seller to honor the creditor's credit card. 1. Section 226.6(a...
12 CFR 1026.12 - Special credit card provisions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 9 2014-01-01 2014-01-01 false Special credit card provisions. 1026.12 Section...-End Credit § 1026.12 Special credit card provisions. (a) Issuance of credit cards. Regardless of the purpose for which a credit card is to be used, including business, commercial, or agricultural use, no...
12 CFR Rules for Card Issuers That... - by-Transaction Basis
Code of Federal Regulations, 2012 CFR
2012-01-01
... THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Special Rules Applicable to Credit Card... Subpart B apply if credit cards are issued and the card issuer and the seller are the same or related... creditor have an agreement authorizing the seller to honor the creditor's credit card. 1. Section 226.6(a...
12 CFR 226.12 - Special credit card provisions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Special credit card provisions. 226.12 Section... SYSTEM TRUTH IN LENDING (REGULATION Z) Open-End Credit § 226.12 Special credit card provisions. (a) Issuance of credit cards. Regardless of the purpose for which a credit card is to be used, including...
12 CFR 226.12 - Special credit card provisions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Special credit card provisions. 226.12 Section... SYSTEM (CONTINUED) TRUTH IN LENDING (REGULATION Z) Open-End Credit § 226.12 Special credit card provisions. (a) Issuance of credit cards. Regardless of the purpose for which a credit card is to be used...
12 CFR 226.12 - Special credit card provisions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 3 2012-01-01 2012-01-01 false Special credit card provisions. 226.12 Section... SYSTEM TRUTH IN LENDING (REGULATION Z) Open-End Credit § 226.12 Special credit card provisions. (a) Issuance of credit cards. Regardless of the purpose for which a credit card is to be used, including...
37 CFR 1.23 - Methods of payment.
Code of Federal Regulations, 2013 CFR
2013-07-01
... made by credit card, except for replenishing a deposit account. Payment of a fee by credit card must specify the amount to be charged to the credit card and such other information as is necessary to process... authorization to charge fees to a credit card. If credit card information is provided on a form or document...
12 CFR 1026.12 - Special credit card provisions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Special credit card provisions. 1026.12 Section...-End Credit § 1026.12 Special credit card provisions. (a) Issuance of credit cards. Regardless of the purpose for which a credit card is to be used, including business, commercial, or agricultural use, no...
Code of Federal Regulations, 2014 CFR
2014-04-01
... identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. 41.32 Section 41.32 Foreign Relations DEPARTMENT OF STATE VISAS VISAS: DOCUMENTATION OF NONIMMIGRANTS UNDER THE... crossing identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. (a...
Code of Federal Regulations, 2010 CFR
2010-04-01
... identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. 41.32 Section 41.32 Foreign Relations DEPARTMENT OF STATE VISAS VISAS: DOCUMENTATION OF NONIMMIGRANTS UNDER THE... crossing identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. (a...
Code of Federal Regulations, 2013 CFR
2013-04-01
... identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. 41.32 Section 41.32 Foreign Relations DEPARTMENT OF STATE VISAS VISAS: DOCUMENTATION OF NONIMMIGRANTS UNDER THE... crossing identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. (a...
Code of Federal Regulations, 2011 CFR
2011-04-01
... identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. 41.32 Section 41.32 Foreign Relations DEPARTMENT OF STATE VISAS VISAS: DOCUMENTATION OF NONIMMIGRANTS UNDER THE... crossing identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. (a...
Code of Federal Regulations, 2012 CFR
2012-04-01
... identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. 41.32 Section 41.32 Foreign Relations DEPARTMENT OF STATE VISAS VISAS: DOCUMENTATION OF NONIMMIGRANTS UNDER THE... crossing identification cards; combined border crossing identification cards and B-1/B-2 visitor visas. (a...
Leshchiner, Elizaveta S; Rush, Jason S; Durney, Michael A; Cao, Zhifang; Dančík, Vlado; Chittick, Benjamin; Wu, Huixian; Petrone, Adam; Bittker, Joshua A; Phillips, Andrew; Perez, Jose R; Shamji, Alykhan F; Kaushik, Virendar K; Daly, Mark J; Graham, Daniel B; Schreiber, Stuart L; Xavier, Ramnik J
2017-10-24
Advances in human genetics have dramatically expanded our understanding of complex heritable diseases. Genome-wide association studies have identified an allelic series of CARD9 variants associated with increased risk of or protection from inflammatory bowel disease (IBD). The predisposing variant of CARD9 is associated with increased NF-κB-mediated cytokine production. Conversely, the protective variant lacks a functional C-terminal domain and is unable to recruit the E3 ubiquitin ligase TRIM62. Here, we used biochemical insights into CARD9 variant proteins to create a blueprint for IBD therapeutics and recapitulated the mechanism of the CARD9 protective variant using small molecules. We developed a multiplexed bead-based technology to screen compounds for disruption of the CARD9-TRIM62 interaction. We identified compounds that directly and selectively bind CARD9, disrupt TRIM62 recruitment, inhibit TRIM62-mediated ubiquitinylation of CARD9, and demonstrate cellular activity and selectivity in CARD9-dependent pathways. Taken together, small molecules targeting CARD9 illustrate a path toward improved IBD therapeutics. Published under the PNAS license.
Rappaport, Noa; Fishilevich, Simon; Nudel, Ron; Twik, Michal; Belinky, Frida; Plaschkes, Inbar; Stein, Tsippi Iny; Cohen, Dana; Oz-Levi, Danit; Safran, Marilyn; Lancet, Doron
2017-08-18
A key challenge in the realm of human disease research is next generation sequencing (NGS) interpretation, whereby identified filtered variant-harboring genes are associated with a patient's disease phenotypes. This necessitates bioinformatics tools linked to comprehensive knowledgebases. The GeneCards suite databases, which include GeneCards (human genes), MalaCards (human diseases) and PathCards (human pathways) together with additional tools, are presented with the focus on MalaCards utility for NGS interpretation as well as for large-scale bioinformatic analyses. VarElect, our NGS interpretation tool, leverages the broad information in the GeneCards suite databases. MalaCards algorithms unify disease-related terms and annotations from 69 sources. Further, MalaCards defines hierarchical relatedness: aliases, disease families, a related diseases network, categories and ontological classifications. GeneCards and MalaCards delineate and share a multi-tiered, scored gene-disease network with stringency levels, including the definition of elite status (high-quality gene-disease pairs coming from manually curated, trustworthy sources), which includes 4500 genes for 8000 diseases. This unique resource is key to NGS interpretation by VarElect. VarElect, a comprehensive search tool that helps infer both direct and indirect links between genes and user-supplied disease/phenotype terms, is robustly strengthened by the information found in MalaCards. The indirect mode benefits from GeneCards' diverse gene-to-gene relationships, including SuperPaths (integrated biological pathways from 12 information sources). We are currently adding an important information layer in the form of "disease SuperPaths", generated from the gene-disease matrix by an algorithm similar to that previously employed for biological pathway unification. This allows the discovery of novel gene-disease and disease-disease relationships. The advent of whole genome sequencing necessitates capacities to go beyond protein coding genes. GeneCards is highly useful in this respect, as it also addresses 101,976 non-protein-coding RNA genes. In a more recent development, we are currently adding an inclusive map of regulatory elements and their inferred target genes, generated by integration from 4 resources. MalaCards provides a rich big-data scaffold for in silico biomedical discovery within the gene-disease universe. VarElect, which depends significantly on both GeneCards and MalaCards power, is a potent tool for supporting the interpretation of wet-lab experiments, notably NGS analyses of disease. The GeneCards suite has thus transcended its 2-decade role in biomedical research, maturing into a key player in clinical investigation.
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2012 CFR
2012-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2014 CFR
2014-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2013 CFR
2013-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) When a test rule or subsequent Federal Register notice pertaining to a test rule expressly obligates processors as well as manufacturers to assume direct testing and data reimbursement responsibilities. (2... processors voluntarily agree to reimburse manufacturers for a portion of test costs. Only those processors...
Atac, R.; Fischler, M.S.; Husby, D.E.
1991-01-15
A bus switching apparatus and method for multiple processor computer systems comprises a plurality of bus switches interconnected by branch buses. Each processor or other module of the system is connected to a spigot of a bus switch. Each bus switch also serves as part of a backplane of a modular crate hardware package. A processor initiates communication with another processor by identifying that other processor. The bus switch to which the initiating processor is connected identifies and secures, if possible, a path to that other processor, either directly or via one or more other bus switches which operate similarly. If a particular desired path through a given bus switch is not available to be used, an alternate path is considered, identified and secured. 11 figures.
Chatterjee, Siddhartha [Yorktown Heights, NY; Gunnels, John A [Brewster, NY
2011-11-08
A method and structure of distributing elements of an array of data in a computer memory to a specific processor of a multi-dimensional mesh of parallel processors includes designating a distribution of elements of at least a portion of the array to be executed by specific processors in the multi-dimensional mesh of parallel processors. The pattern of the designating includes a cyclical repetitive pattern of the parallel processor mesh, as modified to have a skew in at least one dimension so that both a row of data in the array and a column of data in the array map to respective contiguous groupings of the processors such that a dimension of the contiguous groupings is greater than one.
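As an illustration of the distribution scheme described above, the sketch below maps elements of a 2-D array onto a P x Q processor mesh with a cyclic pattern skewed by the row index, so a column no longer lands on a single mesh column; the function name and mesh sizes are invented for the example and are not taken from the patent.

```python
# Hedged sketch, not the patented method itself: cyclic distribution over a
# P x Q mesh with a skew in the column dimension.
def skewed_cyclic_owner(i, j, P, Q):
    """Return the (p, q) mesh coordinates owning array element (i, j)."""
    p = i % P            # plain cyclic distribution over mesh rows
    q = (j + i) % Q      # cyclic over mesh columns, skewed by the row index
    return p, q

if __name__ == "__main__":
    P, Q = 4, 4
    row_owners = {skewed_cyclic_owner(0, j, P, Q) for j in range(8)}
    col_owners = {skewed_cyclic_owner(i, 0, P, Q) for i in range(8)}
    print(sorted(row_owners))  # row 0 spans a full mesh row
    print(sorted(col_owners))  # the skew spreads column 0 over several mesh columns
```

Without the skew term, every element of column 0 would map to mesh column 0; the skew is what lets both rows and columns of the array touch groupings of processors wider than one.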
Atac, Robert; Fischler, Mark S.; Husby, Donald E.
1991-01-01
A bus switching apparatus and method for multiple processor computer systems comprises a plurality of bus switches interconnected by branch buses. Each processor or other module of the system is connected to a spigot of a bus switch. Each bus switch also serves as part of a backplane of a modular crate hardware package. A processor initiates communication with another processor by identifying that other processor. The bus switch to which the initiating processor is connected identifies and secures, if possible, a path to that other processor, either directly or via one or more other bus switches which operate similarly. If a particular desired path through a given bus switch is not available to be used, an alternate path is considered, identified and secured.
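A software analogy of the path-securing behaviour described in both records above might look like the following; the switch topology, busy-link bookkeeping and function names are assumptions for illustration, not the patented control logic.

```python
# Illustrative only: breadth-first search for a free path between bus
# switches, skipping branch buses already secured by other transfers.
from collections import deque

def find_path(switch_links, busy, src_switch, dst_switch):
    """switch_links: dict switch -> neighbouring switches;
    busy: set of (a, b) links (sorted tuples) currently in use."""
    queue = deque([[src_switch]])
    seen = {src_switch}
    while queue:
        path = queue.popleft()
        if path[-1] == dst_switch:
            return path                      # path found; caller would secure it
        for nxt in switch_links[path[-1]]:
            link = tuple(sorted((path[-1], nxt)))
            if nxt not in seen and link not in busy:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                              # no free path at the moment

links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(find_path(links, {("A", "B")}, "A", "D"))  # direct route busy -> ['A', 'C', 'D']
```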
Sex of respondent and credit attitudes as predictors of credit card use and debt payment.
McCall, Michael; Eckrich, Donald W
2006-06-01
Researchers have suggested there may be sex differences in attitudes towards credit card possession and use. Undergraduates, 41 men and 41 women, completed a survey regarding their attitudes towards credit, credit card use, and repayment. Analysis indicated sex played a significant moderating role between number of credit cards used and the importance of paying off monthly balances. Women possessed more credit cards than men and engaged in more frequent shopping. Number of credit cards increased with paying off of monthly balances. Data are discussed in terms of the importance of managing credit card debt in an increasingly cashless society.
Victorian era esthetic and restorative dentistry: an advertising trade card gallery.
Croll, Theodore P; Swanson, Ben Z
2006-01-01
A chief means of print advertising in the Victorian era was the "trade card." Innumerable products, companies, and services were highlighted on colorful chromolithographic trade cards, and these became desirable collectible objects which were pasted into scrapbooks and enjoyed by many families. Dentistry- and oral health-related subjects were often depicted on Victorian trade cards, and esthetic and restorative dentistry themes were featured. This review describes the history of advertising trade cards and offers a photographic gallery of dentistry-related cards of the era.
Variable word length encoder reduces TV bandwidth requirements
NASA Technical Reports Server (NTRS)
Sivertson, W. E., Jr.
1965-01-01
Adaptive variable resolution encoding technique provides an adaptive compression pseudo-random noise signal processor for reducing television bandwidth requirements. Complementary processors are required in both the transmitting and receiving systems. The pretransmission processor is analog-to-digital, while the postreception processor is digital-to-analog.
Harvey, R; Foulds, L; Housden, T; Bennett, K A; Falzon, D; McNarry, A F; Graham, C
2017-03-01
Significant benefits have been demonstrated with the use of peri-operative checklists. We assessed whether a read-aloud didactic action card would improve performance of cannula cricothyroidotomy in a simulated 'can't intubate, can't oxygenate' scenario. A 17-step action card was devised by an expert panel. Participants in their first 4 years of anaesthetic training were randomly assigned into 'no-card' or 'card' groups. Scenarios were video-recorded for analysis. Fifty-three participants (27 no-card and 26 card) completed the scenario. The number of steps omitted was mean (SD) 6.7 (2.0) in the no-card group vs. 0.3 (0.5) in the card group (p < 0.001), but the no-card group was faster to oxygenation by mean (95% CI) 35.4 (6.6-64.2) s. The Kappa statistic was 0.84 (0.73-0.95). Our study demonstrated that action cards are beneficial in achieving successful front-of-neck access using a cannula cricothyroidotomy technique. Further investigation is required to determine this tool's effectiveness in other front-of-neck access situations, and its role in teaching or clinical management. © 2016 The Association of Anaesthetists of Great Britain and Ireland.
Personal medical information system using laser card
NASA Astrophysics Data System (ADS)
Cho, Seong H.; Kim, Keun Ho; Choi, Hyung-Sik; Park, Hyun Wook
1996-04-01
The well-known hospital information system (HIS) and the picture archiving and communication system (PACS) are typical applications of multimedia to the medical area. This paper proposes a personal medical information save-and-carry system using a laser card. This laser card is very useful, especially in emergency situations, because the medical information in the laser card can be read anytime and anywhere if a laser card reader/writer is available. The contents of the laser card include the clinical histories of a patient such as the clinical chart, exam results, diagnostic reports, images, and so on. The purpose of this system is not primary diagnosis, but emergency reference to the clinical history of the patient. This personal medical information system consists of a personal computer integrated with a laser card reader/writer, a color frame grabber, a color CCD camera and, optionally, a high resolution image scanner. A window-based graphical user interface was designed for easy use. The laser card has sufficient capacity to store the personal medical information and fast access speed to store and load the data, with a portable size as compact as a credit card. Database items of the laser card provide the doctors with medical data such as laser card information, patient information, clinical information, and diagnostic result information.
12 CFR 334.91 - Duties of card issuers regarding changes of address.
Code of Federal Regulations, 2010 CFR
2010-01-01
... STATEMENTS OF GENERAL POLICY FAIR CREDIT REPORTING Identity Theft Red Flags § 334.91 Duties of card issuers regarding changes of address. (a) Scope. This section applies to an issuer of a debit or credit card (card... been issued a credit or debit card. (2) Clear and conspicuous means reasonably understandable and...
12 CFR 41.91 - Duties of card issuers regarding changes of address.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FAIR CREDIT REPORTING Identity Theft Red Flags § 41.91 Duties of card issuers regarding changes of address. (a) Scope. This section applies to an issuer of a debit or credit card (card issuer) that is a... means a consumer who has been issued a credit or debit card. (2) Clear and conspicuous means reasonably...
31 CFR 548.409 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U... SANCTIONS REGULATIONS Interpretations § 548.409 Credit extended and cards issued by U.S. financial..., charge cards, debit cards, or other credit facilities issued by a U.S. financial institution to a person...
41 CFR 109-38.801 - Obtaining SF 149, U.S. Government National Credit Card.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... Government National Credit Card. 109-38.801 Section 109-38.801 Public Contracts and Property Management..., U.S. Government National Credit Card § 109-38.801 Obtaining SF 149, U.S. Government National Credit Card. DOE offices electing to use national credit cards shall request the assignment of billing address...
31 CFR 588.409 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U... BALKANS STABILIZATION REGULATIONS Interpretations § 588.409 Credit extended and cards issued by U.S... not limited to, charge cards, debit cards, or other credit facilities issued by a U.S. financial...
31 CFR 598.409 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U... NARCOTICS KINGPIN SANCTIONS REGULATIONS Interpretations § 598.409 Credit extended and cards issued by U.S... existing credit agreements, including, but not limited to, charge cards, debit cards, or other credit...
31 CFR 593.409 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U... LIBERIAN REGIME OF CHARLES TAYLOR SANCTIONS REGULATIONS Interpretations § 593.409 Credit extended and cards..., including, but not limited to, charge cards, debit cards, or other credit facilities issued by a U.S...
37 CFR 2.207 - Methods of payment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... credit card, except for replenishing a deposit account. Payment of a fee by credit card must specify the amount to be charged to the credit card and such other information as is necessary to process the charge... fees to a credit card. If credit card information is provided on a form or document other than a form...
12 CFR 717.91 - Duties of card issuers regarding changes of address.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AFFECTING CREDIT UNIONS FAIR CREDIT REPORTING Identity Theft Red Flags § 717.91 Duties of card issuers regarding changes of address. (a) Scope. This section applies to an issuer of a debit or credit card (card... means a member who has been issued a credit or debit card. (2) Clear and conspicuous means reasonably...
31 CFR 594.410 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U... TERRORISM SANCTIONS REGULATIONS Interpretations § 594.410 Credit extended and cards issued by U.S. financial... agreements, including, but not limited to, charge cards, debit cards, or other credit facilities issued by a...
31 CFR 542.409 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U... SANCTIONS REGULATIONS Interpretations § 542.409 Credit extended and cards issued by U.S. financial..., charge cards, debit cards, or other credit facilities issued by a U.S. financial institution to a person...
12 CFR 571.91 - Duties of card issuers regarding changes of address.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FAIR CREDIT REPORTING Identity Theft Red Flags § 571.91 Duties of card issuers regarding changes of address. (a) Scope. This section applies to an issuer of a debit or credit card (card issuer) that is a... consumer who has been issued a credit or debit card. (2) Clear and conspicuous means reasonably...
31 CFR 547.409 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U... DEMOCRATIC REPUBLIC OF THE CONGO SANCTIONS REGULATIONS Interpretations § 547.409 Credit extended and cards..., including, but not limited to, charge cards, debit cards, or other credit facilities issued by a U.S...
31 CFR 536.409 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U... NARCOTICS TRAFFICKING SANCTIONS REGULATIONS Interpretations § 536.409 Credit extended and cards issued by U... any existing credit agreements, including, but not limited to, charge cards, debit cards, or other...
The "Negative" Credit Card Effect: Credit Cards as Spending-Limiting Stimuli in New Zealand
ERIC Educational Resources Information Center
Lie, Celia; Hunt, Maree; Peters, Heather L.; Veliu, Bahrie; Harper, David
2010-01-01
The "credit card effect" describes a finding where greater value is given to consumer items if credit card logos are present. One explanation for the effect is that credit cards elicit spending behavior through associative learning. If this is true, social, economic and historical contexts should alter this effect. In Experiment 1, Year…
Vibration Damping Circuit Card Assembly
NASA Technical Reports Server (NTRS)
Hunt, Ronald Allen (Inventor)
2016-01-01
A vibration damping circuit card assembly includes a populated circuit card having a mass M. A closed metal container is coupled to a surface of the populated circuit card at approximately a geometric center of the populated circuit card. Tungsten balls fill approximately 90% of the metal container with a collective mass of the tungsten balls being approximately (0.07) M.
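A worked example of the stated proportion may help: for an assumed card mass, the collective tungsten ball mass follows directly from the 0.07 M figure. The card mass below is an assumption, not a value from the patent.

```python
# Assumed card mass; the 0.07 factor and ~90% fill are from the abstract above.
M_card_grams = 250.0
ball_mass_total = 0.07 * M_card_grams
print(f"Collective tungsten ball mass: {ball_mass_total:.1f} g "
      f"(balls filling about 90% of the container volume)")
```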
37 CFR 2.207 - Methods of payment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... credit card, except for replenishing a deposit account. Payment of a fee by credit card must specify the amount to be charged to the credit card and such other information as is necessary to process the charge... fees to a credit card. If credit card information is provided on a form or document other than a form...
41 CFR 109-38.801 - Obtaining SF 149, U.S. Government National Credit Card.
Code of Federal Regulations, 2011 CFR
2011-01-01
.... Government National Credit Card. 109-38.801 Section 109-38.801 Public Contracts and Property Management..., U.S. Government National Credit Card § 109-38.801 Obtaining SF 149, U.S. Government National Credit Card. DOE offices electing to use national credit cards shall request the assignment of billing address...
41 CFR 109-38.801 - Obtaining SF 149, U.S. Government National Credit Card.
Code of Federal Regulations, 2012 CFR
2012-01-01
.... Government National Credit Card. 109-38.801 Section 109-38.801 Public Contracts and Property Management..., U.S. Government National Credit Card § 109-38.801 Obtaining SF 149, U.S. Government National Credit Card. DOE offices electing to use national credit cards shall request the assignment of billing address...
41 CFR 109-38.801 - Obtaining SF 149, U.S. Government National Credit Card.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... Government National Credit Card. 109-38.801 Section 109-38.801 Public Contracts and Property Management..., U.S. Government National Credit Card § 109-38.801 Obtaining SF 149, U.S. Government National Credit Card. DOE offices electing to use national credit cards shall request the assignment of billing address...
12 CFR Rules for Card Issuers That... - by-Transaction Basis
Code of Federal Regulations, 2013 CFR
2013-01-01
... Credit Card Accounts and Open-End Credit Offered to College Students Reevaluation of rate increases. Pt... provisions of Subpart B apply if credit cards are issued and the card issuer and the seller are the same or... creditor have an agreement authorizing the seller to honor the creditor's credit card. 1. Section 226.6(a...
41 CFR 109-38.801 - Obtaining SF 149, U.S. Government National Credit Card.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... Government National Credit Card. 109-38.801 Section 109-38.801 Public Contracts and Property Management..., U.S. Government National Credit Card § 109-38.801 Obtaining SF 149, U.S. Government National Credit Card. DOE offices electing to use national credit cards shall request the assignment of billing address...
37 CFR 2.207 - Methods of payment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... credit card, except for replenishing a deposit account. Payment of a fee by credit card must specify the amount to be charged to the credit card and such other information as is necessary to process the charge... fees to a credit card. If credit card information is provided on a form or document other than a form...
37 CFR 2.207 - Methods of payment.
Code of Federal Regulations, 2013 CFR
2013-07-01
... credit card, except for replenishing a deposit account. Payment of a fee by credit card must specify the amount to be charged to the credit card and such other information as is necessary to process the charge... fees to a credit card. If credit card information is provided on a form or document other than a form...
Accelerating molecular dynamic simulation on the cell processor and Playstation 3.
Luttmann, Edgar; Ensign, Daniel L; Vaidyanathan, Vishal; Houston, Mike; Rimon, Noam; Øland, Jeppe; Jayachandran, Guha; Friedrichs, Mark; Pande, Vijay S
2009-01-30
Implementation of molecular dynamics (MD) calculations on novel architectures will vastly increase its power to calculate the physical properties of complex systems. Herein, we detail algorithmic advances developed to accelerate MD simulations on the Cell processor, a commodity processor found in PlayStation 3 (PS3). In particular, we discuss issues regarding memory access versus computation and the types of calculations which are best suited for streaming processors such as the Cell, focusing on implicit solvation models. We conclude with a comparison of improved performance on the PS3's Cell processor over more traditional processors. (c) 2008 Wiley Periodicals, Inc.
Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL
2009-07-21
In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.
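A rough sketch of the allocation idea, under the assumption that the routing rules yield a linear ordering treated as a loop and that jobs are packed into contiguous runs on that ring (the actual MC allocation algorithm is not reproduced here):

```python
def allocate_on_ring(free, order, request):
    """First-fit of `request` contiguous processors on a ring given by `order`.
    `free` is the set of currently idle processors; returns the run or None."""
    n = len(order)
    for start in range(n):
        run = [order[(start + k) % n] for k in range(request)]
        if all(p in free for p in run):
            return run
    return None

order = [0, 1, 3, 2, 6, 7, 5, 4]         # assumed routing-derived ordering
free = {1, 2, 3, 4, 5, 7}
print(allocate_on_ring(free, order, 3))  # -> [1, 3, 2]
```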
Communications systems and methods for subsea processors
Gutierrez, Jose; Pereira, Luis
2016-04-26
A subsea processor may be located near the seabed of a drilling site and used to coordinate operations of underwater drilling components. The subsea processor may be enclosed in a single interchangeable unit that fits a receptor on an underwater drilling component, such as a blow-out preventer (BOP). The subsea processor may issue commands to control the BOP and receive measurements from sensors located throughout the BOP. A shared communications bus may interconnect the subsea processor and underwater components and the subsea processor and a surface or onshore network. The shared communications bus may be operated according to a time division multiple access (TDMA) scheme.
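The time-division idea on the shared bus can be pictured with a minimal slot table; the slot length, frame layout and device names below are invented for illustration.

```python
SLOT_MS = 10
frame = ["subsea_processor", "bop_sensors", "surface_link", "spare"]  # assumed layout

def slot_owner(time_ms):
    """Device allowed to transmit on the shared bus at a given millisecond."""
    return frame[(time_ms // SLOT_MS) % len(frame)]

for t in range(0, 60, 10):
    print(f"{t:3d} ms -> {slot_owner(t)}")
```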
An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hudec, Ján; Gramatová, Elena
2015-07-01
The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to software-oriented testing. Quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. The rules, parameters and fitness functions were defined for various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.
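A much-simplified genetic-algorithm skeleton in the spirit of the method above is sketched below; the toy instruction set, the variety-based stand-in for code coverage, and the GA parameters are all placeholders, since a real flow would score fitness by simulating the VHDL processor model.

```python
# Simplified GA skeleton for instruction-sequence test generation.
import random

INSTRUCTIONS = ["ADD", "SUB", "LOAD", "STORE", "BRANCH", "NOP"]  # toy ISA

def random_program(length=8):
    return [random.choice(INSTRUCTIONS) for _ in range(length)]

def fitness(program):
    # Placeholder for simulated code coverage: here, just reward variety.
    return len(set(program)) / len(INSTRUCTIONS)

def evolve(pop_size=20, generations=30, mutation_rate=0.1):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.choice(INSTRUCTIONS)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```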
Experimental testing of the noise-canceling processor.
Collins, Michael D; Baer, Ralph N; Simpson, Harry J
2011-09-01
Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America
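For context on the comparison, the conventional Bartlett processor evaluates w^H R w over candidate replica vectors w and the data covariance R; a synthetic, noise-free sketch is below (the noise-canceling processor itself is not reproduced here).

```python
# Sketch of a conventional (Bartlett) matched-field processor on synthetic data.
import numpy as np

def bartlett_surface(replicas, data_cov):
    """replicas: (n_candidates, n_sensors) complex replica vectors;
    data_cov: (n_sensors, n_sensors) sample covariance of the array data."""
    out = np.empty(len(replicas))
    for k, w in enumerate(replicas):
        w = w / np.linalg.norm(w)
        out[k] = np.real(np.conj(w) @ data_cov @ w)
    return out

rng = np.random.default_rng(0)
n_sensors, n_candidates = 16, 50
true = rng.standard_normal(n_sensors) + 1j * rng.standard_normal(n_sensors)
replicas = rng.standard_normal((n_candidates, n_sensors)) \
    + 1j * rng.standard_normal((n_candidates, n_sensors))
replicas[7] = true                                  # candidate 7 matches the source
data_cov = np.outer(true, np.conj(true))            # noise-free covariance
print(int(np.argmax(bartlett_surface(replicas, data_cov))))  # -> 7
```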
A High Performance VLSI Computer Architecture For Computer Graphics
NASA Astrophysics Data System (ADS)
Chin, Chi-Yuan; Lin, Wen-Tai
1988-10-01
A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g., high resolution, realistic animation, and real-time display. All processors share a global memory that is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high-density interconnection technology, it is feasible to implement a 64-processor system achieving 2.5 billion operations per second, a performance needed in most advanced graphics applications.
Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA
NASA Astrophysics Data System (ADS)
Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei
2013-03-01
With the development of international wireless communication standards, there is an increasing computational requirement for baseband signal processors. Time-to-market pressure makes it impossible to completely redesign new processors for the evolving standards. Due to their high flexibility and low power, software defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA approaches. In addition, large amounts of parallel data are processed in computation-intensive functions, which fosters the development of single instruction multiple data (SIMD) architectures in SDR platforms. A new way must therefore be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of programmable SIMD SDR processors in the machine description language LISA. LISA is a language for describing instruction set architectures that enables rapid modeling at the architectural level. To evaluate the proposed processor, three common baseband functions, FFT, FIR digital filtering and matrix multiplication, were mapped onto the SDR platform. Analytical results showed that the SDR processor achieved up to a 47.1% performance boost relative to the processor it was compared against.
NASA Astrophysics Data System (ADS)
Weber, Walter H.; Mair, H. Douglas; Jansen, Dion
2003-03-01
A suite of basic signal processors has been developed. These basic building blocks can be cascaded together to form more complex processors without the need for programming. The data structures between each of the processors are handled automatically. This allows a processor built for one purpose to be applied to any type of data such as images, waveform arrays and single values. The processors are part of Winspect Data Acquisition software. The new processors are fast enough to work on A-scan signals live while scanning. Their primary use is to extract features, reduce noise or to calculate material properties. The cascaded processors work equally well on live A-scan displays, live gated data or as a post-processing engine on saved data. Researchers are able to call their own MATLAB or C-code from anywhere within the processor structure. A built-in formula node processor that uses a simple algebraic editor may make external user programs unnecessary. This paper also discusses the problems associated with ad hoc software development and how graphical programming languages can tie up researchers writing software rather than designing experiments.
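The cascading idea can be pictured as chaining small processing blocks without bespoke glue code; the block names below are invented and are not Winspect's actual processors.

```python
# Toy illustration: compose simple blocks into one A-scan feature extractor.
import numpy as np

def cascade(*blocks):
    """Compose processing blocks left to right into one callable."""
    def run(data):
        for block in blocks:
            data = block(data)
        return data
    return run

def rectify(x):
    return np.abs(x)

def smooth(x):
    return np.convolve(x, np.ones(5) / 5, mode="same")

def peak(x):
    return float(np.max(x))          # feature extraction: a single value

process_ascan = cascade(rectify, smooth, peak)
signal = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.default_rng(0).standard_normal(200)
print(process_ascan(signal))
```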
Array processor architecture connection network
NASA Technical Reports Server (NTRS)
Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)
1982-01-01
A connection network is disclosed for use between a parallel array of processors and a parallel array of memory modules for establishing non-conflicting data communications paths between requested memory modules and requesting processors. The connection network includes a plurality of switching elements interposed between the processor array and the memory modules array in an Omega networking architecture. Each switching element includes a first and a second processor side port, a first and a second memory module side port, and control logic circuitry for providing data connections between the first and second processor ports and the first and second memory module ports. The control logic circuitry includes strobe logic for examining data arriving at the first and the second processor ports to indicate when the data arriving is requesting data from a requesting processor to a requested memory module. Further, connection circuitry is associated with the strobe logic for examining requesting data arriving at the first and the second processor ports for providing a data connection therefrom to the first and the second memory module ports in response thereto when the data connection so provided does not conflict with a pre-established data connection currently in use.
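General destination-tag routing through an Omega network of 2x2 switching elements can be sketched as follows; this shows standard Omega-network behaviour, not the specific strobe and connection logic claimed in the patent.

```python
# Destination-tag routing sketch for an Omega network of 2x2 switches.
def omega_route(src, dst, n_bits):
    """Output port (0=upper, 1=lower) taken at each stage to reach `dst`."""
    ports, addr = [], src
    mask = (1 << n_bits) - 1
    for stage in range(n_bits):
        addr = ((addr << 1) | ((addr >> (n_bits - 1)) & 1)) & mask  # perfect shuffle
        out = (dst >> (n_bits - 1 - stage)) & 1                     # next destination bit
        addr = (addr & ~1) | out                                    # exit the chosen port
        ports.append(out)
    assert addr == dst
    return ports

print(omega_route(src=5, dst=3, n_bits=3))  # -> [0, 1, 1]
```

At each stage the request follows the next bit of the requested module's address, which is why the tracked address ends up equal to the destination.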
Swamydas, Muthulekha; Rodriguez, Carlos A.; Lim, Jean K.; Mendez, Laura M.; Fink, Danielle L.; Hsu, Amy P.; Zhai, Bing; Karauzum, Hatice; Mikelis, Constantinos M.; Rose, Stacey R.; Ferre, Elise M. N.; Yockey, Lynne; Lemberg, Kimberly; Kuehn, Hye Sun; Rosenzweig, Sergio D.; Lin, Xin; Chittiboina, Prashant; Datta, Sandip K.; Belhorn, Thomas H.; Weimer, Eric T.; Hernandez, Michelle L.; Hohl, Tobias M.; Kuhns, Douglas B.; Lionakis, Michail S.
2015-01-01
Candida is the most common human fungal pathogen and causes systemic infections that require neutrophils for effective host defense. Humans deficient in the C-type lectin pathway adaptor protein CARD9 develop spontaneous fungal disease that targets the central nervous system (CNS). However, how CARD9 promotes protective antifungal immunity in the CNS remains unclear. Here, we show that a patient with CARD9 deficiency had impaired neutrophil accumulation and induction of neutrophil-recruiting CXC chemokines in the cerebrospinal fluid despite uncontrolled CNS Candida infection. We phenocopied the human susceptibility in Card9 -/- mice, which develop uncontrolled brain candidiasis with diminished neutrophil accumulation. The induction of neutrophil-recruiting CXC chemokines is significantly impaired in infected Card9 -/- brains, from both myeloid and resident glial cellular sources, whereas cell-intrinsic neutrophil chemotaxis is Card9-independent. Taken together, our data highlight the critical role of CARD9-dependent neutrophil trafficking into the CNS and provide novel insight into the CNS fungal susceptibility of CARD9-deficient humans. PMID:26679537
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 9 2013-01-01 2013-01-01 false Fluid milk processor. 1160.108 Section 1160.108... AGREEMENTS AND ORDERS; MILK), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 9 2012-01-01 2012-01-01 false Fluid milk processor. 1160.108 Section 1160.108... Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 9 2014-01-01 2013-01-01 true Fluid milk processor. 1160.108 Section 1160.108... AGREEMENTS AND ORDERS; MILK), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Fluid milk processor. 1160.108 Section 1160.108... Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 9 2011-01-01 2011-01-01 false Fluid milk processor. 1160.108 Section 1160.108... Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
Two sets of business card-sized lists of tips for preventing bed bug infestations: one for general use around the home, the other for travelers. Print a single card or a page of cards for distribution.
International images: business cards.
Gaston, S; Pucci, J
1991-01-01
Nursing specialists engage in a variety of international professional activities. Business cards are an important aspect of establishing a professional image. This article presents recommended business card contents, international etiquette, card design and production, and card innovations.
British and American attitudes toward credit cards.
Yang, Bijou; James, Simon; Lester, David
2006-04-01
American university students owned more than twice as many credit cards as British university students. However, scores on a credit card attitude scale predicted the number of cards owned by respondents in both countries.
Shared performance monitor in a multiprocessor system
Chiu, George; Gara, Alan G; Salapura, Valentina
2014-12-02
A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor devices, each processor device for generating signals representing occurrences of events in the processor device, and a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU is further programmed to monitor event signals issued from non-processor devices.
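A software analogy of a single counter bank shared among cores might look like this; the class and event names are invented, and the real unit is hardware, so this is only a way to picture the sharing.

```python
# Minimal sketch: one bank of counters programmed to watch selected
# (core, event) pairs, shared across all cores.
class SharedPMU:
    def __init__(self, n_counters):
        self.counters = [0] * n_counters
        self.assignment = {}              # counter index -> (core, event)

    def program(self, counter, core, event):
        self.assignment[counter] = (core, event)
        self.counters[counter] = 0

    def record(self, core, event):
        for idx, (c, e) in self.assignment.items():
            if (c, e) == (core, event):
                self.counters[idx] += 1

pmu = SharedPMU(n_counters=2)
pmu.program(0, core=1, event="cache_miss")
pmu.program(1, core=3, event="branch_mispredict")
pmu.record(1, "cache_miss")
pmu.record(2, "cache_miss")               # not monitored -> ignored
print(pmu.counters)                        # [1, 0]
```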
Noncoherent parallel optical processor for discrete two-dimensional linear transformations.
Glaser, I
1980-10-01
We describe a parallel optical processor, based on a lenslet array, that provides general linear two-dimensional transformations using noncoherent light. Such a processor could become useful in image- and signal-processing applications in which the throughput requirements cannot be adequately satisfied by state-of-the-art digital processors. Experimental results that illustrate the feasibility of the processor by demonstrating its use in parallel optical computation of the two-dimensional Walsh-Hadamard transformation are presented.
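For reference, the transform demonstrated optically above can be computed digitally as a separable product with the Hadamard matrix; a small sketch follows.

```python
# Digital reference for the 2-D Walsh-Hadamard transform of an N x N image
# (N a power of two), built from the Sylvester Hadamard matrix.
import numpy as np

def hadamard(n):
    """Hadamard matrix of order n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wht2d(image):
    n = image.shape[0]
    H = hadamard(n)
    return H @ image @ H      # separable 2-D transform (unnormalized)

img = np.arange(16, dtype=float).reshape(4, 4)
coeffs = wht2d(img)
print(np.allclose(wht2d(coeffs), 16 * img))  # H H = n*I, so applying twice scales by n^2
```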
van der Ham, Alida J; Voskes, Yolande; van Kempen, Nel; Broerse, Jacqueline E W; Widdershoven, Guy A M
2013-06-01
The crisis card is a specific form of psychiatric advance directive, documenting mental health clients' treatment preferences in advance of a potential psychiatric crisis. In this paper, we aim to provide insight into implementation issues surrounding the crisis card. A Dutch crisis-card project formed the scope of this study. Data were collected through interviews with 15 participants from six stakeholder groups. Identified implementation issues are: (a) the role of the crisis-card counselor, (b) lack of distribution and familiarity, (c) care professionals' routines, and (d) client readiness. The crisis-card counselor appears to play a key role in fostering benefits of the crisis card by supporting clients' perspectives. More structural integration of the crisis card in care processes may enhance its impact, but should be carefully explored. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Mitsunaga, Erin M
2006-05-01
During the Korean War, International Business Machines (IBM) punch cards were created for every individual involved in military combat. Each card contained all pertinent personal information about the individual and was utilized to keep track of all soldiers involved. However, at present, all of the information known about these punch cards reveals only their format and their significance; there is little to no information on how these cards were created or how to interpret the information contained without the aid of the computer system used during the war. Today, it is believed there is no one available to explain this computerized system, nor do the original computers exist. This decode strategy is the result of an attempt to decipher the information on these cards through the use of all available medical and dental records for each individual examined. By cross-referencing the relevant personal information with the known format of the cards, a basic guess-and-check method was utilized. After examining hundreds of IBM punch cards, however, it has become clear that the punch card method of recording information was not infallible. In some cases, there are gaps of information on cards where there are data recorded on personal records; in others, information is punched incorrectly onto the cards, perhaps as the result of a transcription error. Taken all together, it is clear that the information contained on each individual's card should be taken solely as another form of personal documentation.
NASA Center for Climate Simulation (NCCS) Presentation
NASA Technical Reports Server (NTRS)
Webster, William P.
2012-01-01
The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.
2010-2011 Performance of the AirNow Satellite Data Processor
NASA Astrophysics Data System (ADS)
Pasch, A. N.; DeWinter, J. L.; Haderman, M. D.; van Donkelaar, A.; Martin, R. V.; Szykman, J.; White, J. E.; Dickerson, P.; Zahn, P. H.; Dye, T. S.
2012-12-01
The U.S. Environmental Protection Agency's (EPA) AirNow program provides maps of real-time hourly Air Quality Index (AQI) conditions and daily AQI forecasts nationwide (http://www.airnow.gov). The public uses these maps to make health-based decisions. The usefulness of the AirNow air quality maps depends on the accuracy and spatial coverage of air quality measurements. Currently, the maps use only ground-based measurements, which have significant gaps in coverage in some parts of the United States. As a result, contoured AQI levels have high uncertainty in regions far from monitors. To improve the usefulness of air quality maps, scientists at EPA, Dalhousie University, and Sonoma Technology, Inc. have been working in collaboration with the National Aeronautics and Space Administration (NASA) and the National Oceanic and Atmospheric Administration (NOAA) to incorporate satellite-estimated surface PM2.5 concentrations into the maps via the AirNow Satellite Data Processor (ASDP). These satellite estimates are derived using NASA/NOAA satellite aerosol optical depth (AOD) retrievals and GEOS-Chem modeled ratios of surface PM2.5 concentrations to AOD. GEOS-Chem is a three-dimensional chemical transport model for atmospheric composition driven by meteorological input from the Goddard Earth Observing System (GEOS). The ASDP can fuse multiple PM2.5 concentration data sets to generate AQI maps with improved spatial coverage. The goal of ASDP is to provide more detailed AQI information in monitor-sparse locations and augment monitor-dense locations with more information. We will present a statistical analysis for 2010-2011 of the ASDP predictions of PM2.5 focusing on performance at validation sites. In addition, we will present several case studies evaluating the ASDP's performance for multiple regions and seasons, focusing specifically on days when large spatial gradients in AQI and wildfire smoke impact were observed.
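The core relationship used by the ASDP reduces to scaling the satellite AOD retrieval by a modeled PM2.5-to-AOD ratio for that place and time; the numbers below are purely illustrative.

```python
# Surface PM2.5 estimate from satellite AOD and a modeled ratio (values assumed).
def estimate_pm25(aod, modeled_pm25_to_aod_ratio):
    """Estimated surface PM2.5 in ug/m^3 from a unitless AOD retrieval."""
    return aod * modeled_pm25_to_aod_ratio

aod = 0.35                       # satellite AOD retrieval (unitless)
eta = 40.0                       # GEOS-Chem PM2.5/AOD ratio, ug/m^3 per unit AOD (assumed)
print(estimate_pm25(aod, eta))   # -> 14.0 ug/m^3
```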
Processors for wavelet analysis and synthesis: NIFS and TI-C80 MVP
NASA Astrophysics Data System (ADS)
Brooks, Geoffrey W.
1996-03-01
Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high speed spatio-temporal Gaussian filtering, which could be used for the QMF low-pass function, and difference of Gaussian filtering, which could be used for the QMF high-pass function. Although not designed specifically for wavelet analysis, the biologically-inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
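As a minimal digital stand-in for the low-pass/high-pass split discussed above, a one-dimensional Haar QMF analysis/synthesis pair is sketched below; the analog NIFS Gaussian filters themselves are not modeled.

```python
# Haar QMF analysis and synthesis with perfect reconstruction.
import numpy as np

def qmf_analysis(x):
    """Split an even-length signal into low-pass and high-pass halves."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def qmf_synthesis(lo, hi):
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
lo, hi = qmf_analysis(x)
print(np.allclose(qmf_synthesis(lo, hi), x))  # perfect reconstruction -> True
```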
77 FR 124 - Biological Processors of Alabama; Decatur, Morgan County, AL; Notice of Settlement
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-03
... ENVIRONMENTAL PROTECTION AGENCY [FRL-9612-9] Biological Processors of Alabama; Decatur, Morgan... reimbursement of past response costs concerning the Biological Processors of Alabama Superfund Site located in... Ms. Paula V. Painter. Submit your comments by Site name Biological Processors of Alabama Superfund...
ERIC Educational Resources Information Center
Porter, Dennis
One recommendation of the 1989 California Strategic Plan for Adult Education is the use of EduCard. EduCard, the Adult Education Access Card, is a means of giving learners access to information about educational opportunities and providing administrators with machine-readable information on learners' prior education and training. Three models are:…
Jeu de cartes or Jeu Descartes: Business Cards in a French Course for the Professions.
ERIC Educational Resources Information Center
Gegerias, Mary
This paper discusses the use of French business cards in a college-level French language and culture course for professionals. Among other assignments, students were each given a different card and asked to speak about the design of their card, the business represented, idiomatic expressions and historical allusions on the card, and the use of…
31 CFR 543.409 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U...'IVOIRE SANCTIONS REGULATIONS Interpretations § 543.409 Credit extended and cards issued by U.S. financial..., charge cards, debit cards, or other credit facilities issued by a U.S. financial institution to a person...
12 CFR 222.91 - Duties of card issuers regarding changes of address.
Code of Federal Regulations, 2010 CFR
2010-01-01
... described in § 222.90(a) that issues a debit or credit card (card issuer). (b) Definitions. For purposes of this section: (1) Cardholder means a consumer who has been issued a credit or debit card. (2) Clear and... notification of a change of address for a consumer's debit or credit card account and, within a short period of...
31 CFR 541.408 - Credit extended and cards issued by U.S. financial institutions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Credit extended and cards issued by U... ZIMBABWE SANCTIONS REGULATIONS Interpretations § 541.408 Credit extended and cards issued by U.S. financial..., charge cards, debit cards, or other credit facilities issued by a U.S. financial institution to a person...
16 CFR 681.2 - Duties of card issuers regarding changes of address.
Code of Federal Regulations, 2010 CFR
2010-01-01
... applies to a person described in § 681.1(a) that issues a debit or credit card (card issuer). (b... of address if it receives notification of a change of address for a consumer's debit or credit card... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Duties of card issuers regarding changes of...
49 CFR 375.221 - May I use a charge or credit card plan for payments?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 5 2010-10-01 2010-10-01 false May I use a charge or credit card plan for... card plan for payments? (a) You may provide in your tariff for the acceptance of charge or credit cards for the payment of freight charges. Accepting charge or credit card payments is different than...
MalaCards: an integrated compendium for diseases and their annotation
Rappaport, Noa; Nativ, Noam; Stelzer, Gil; Twik, Michal; Guan-Golan, Yaron; Iny Stein, Tsippi; Bahir, Iris; Belinky, Frida; Morrey, C. Paul; Safran, Marilyn; Lancet, Doron
2013-01-01
Comprehensive disease classification, integration and annotation are crucial for biomedical discovery. At present, disease compilation is incomplete, heterogeneous and often lacking systematic inquiry mechanisms. We introduce MalaCards, an integrated database of human maladies and their annotations, modeled on the architecture and strategy of the GeneCards database of human genes. MalaCards mines and merges 44 data sources to generate a computerized card for each of 16 919 human diseases. Each MalaCard contains disease-specific prioritized annotations, as well as inter-disease connections, empowered by the GeneCards relational database, its searches and GeneDecks set analyses. First, we generate a disease list from 15 ranked sources, using disease-name unification heuristics. Next, we use four schemes to populate MalaCards sections: (i) directly interrogating disease resources, to establish integrated disease names, synonyms, summaries, drugs/therapeutics, clinical features, genetic tests and anatomical context; (ii) searching GeneCards for related publications, and for associated genes with corresponding relevance scores; (iii) analyzing disease-associated gene sets in GeneDecks to yield affiliated pathways, phenotypes, compounds and GO terms, sorted by a composite relevance score and presented with GeneCards links; and (iv) searching within MalaCards itself, e.g. for additional related diseases and anatomical context. The latter forms the basis for the construction of a disease network, based on shared MalaCards annotations, embodying associations based on etiology, clinical features and clinical conditions. This broadly disposed network has a power-law degree distribution, suggesting that this might be an inherent property of such networks. Work in progress includes hierarchical malady classification, ontological mapping and disease set analyses, striving to make MalaCards an even more effective tool for biomedical research. Database URL: http://www.malacards.org/ PMID:23584832
Factors Determining Availability, Utilization and Retention of Child Health Card in Western Nepal.
Paudel, K P; Bajracharya, D C; Karki, K; K C, A
2016-05-01
The immunization card has been revised with the addition of general information about child health and is now called the child health card. The card is a tool used by the Health Management Information System in Nepal and is important for tracking immunization records. The aim was to identify the factors determining the availability, utilization and retention of the child health card in Western Nepal. A cross-sectional study was conducted among mothers of children under 24 months of age in the Gorkha (Western Hill) and Nawalparasi (Western Terai) districts. The sample size was 600, and systematic random sampling was used to select the mothers. Data entry and analysis were done using SPSS; qualitative data were analyzed using a matrix. The average age of respondents was 24 years, and the majority had attained higher-level education. Retention of the card was 82.2%: 90.3% among children aged 0-12 months and 74% among those aged 12-24 months. The main reasons for lower retention were the card being torn or played with by the child (54.6%), followed by lack of a proper storage place, unawareness of its importance, and poor card quality. The new child health cards were in short supply, compelling the use of both new and old cards and creating inconsistency. Beyond immunization tracking, the card was used for birth registration and for further studies abroad. Broadening the areas in which the child health card is used could increase its retention.
19. SECOND FLOOR, CARDING MACHINE, BY 'CARDING SPECIALISTS, CO., LTD., ...
19. SECOND FLOOR, CARDING MACHINE, BY 'CARDING SPECIALISTS, CO., LTD., PELLON LAND WORKS, HALIFAX, ENGLAND, SN #M2983 14 (EST. DATE 1940'S+). - Bamberg Cotton Mill, Main Street, Bamberg, Bamberg County, SC
High-density, fail-in-place switches for computer and data networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coteus, Paul W.; Doany, Fuad E.; Hall, Shawn A.
A structure for a network switch. The network switch may include a plurality of spine chips arranged on a plurality of spine cards, where one or more spine chips are located on each spine card; and a plurality of leaf chips arranged on a plurality of leaf cards, wherein one or more leaf chips are located on each leaf card, where each spine card is connected to every leaf chip and the plurality of spine chips are surrounded on at least two sides by leaf cards.
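The spine-leaf wiring described here (every spine card connected to every leaf chip) can be illustrated with a short sketch; the card counts and chips-per-card below are hypothetical, not taken from the patent:

```python
# Illustrative model of the switch topology: every spine card has a link to
# every leaf chip. Counts are arbitrary assumptions for the example.

from itertools import product

SPINE_CARDS = 4          # hypothetical number of spine cards
LEAF_CARDS = 8           # hypothetical number of leaf cards
CHIPS_PER_LEAF = 2       # hypothetical leaf chips per leaf card

def build_links():
    """Return the set of (spine_card, leaf_chip) links."""
    leaf_chips = list(product(range(LEAF_CARDS), range(CHIPS_PER_LEAF)))
    return {(sc, chip) for sc in range(SPINE_CARDS) for chip in leaf_chips}

links = build_links()
# Each spine card sees every leaf chip exactly once.
assert all(
    sum(1 for (sc, chip) in links if sc == s) == LEAF_CARDS * CHIPS_PER_LEAF
    for s in range(SPINE_CARDS)
)
print(f"{len(links)} spine-card-to-leaf-chip links")
```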
Health care report cards: what about consumers' perspectives?
McGee, J; Knutson, D
1994-10-01
Though the report card style is seen by many as a way to create better-informed consumers, very little is actually known about how consumers will respond to health care report cards. Report cards are only one of many factors that influence health care decision making. Much consumer-oriented effort and fine-tuning will be required to make report cards effective. Using the approach called "social marketing" as a framework, specific examples are used to outline some ideas for more intensive pursuit of consumers' perspectives in the design and distribution of report cards.
Disaster-hardened imaging POD for PACS
NASA Astrophysics Data System (ADS)
Honeyman-Buck, Janice; Frost, Meryll
2005-04-01
After the events of 9/11, many people questioned their ability to keep critical services operational in the face of massive infrastructure failure. Hospitals increased their backup and recovery power, made plans for emergency water and food, and operated on heightened alert with more frequent disaster drills. In a film-based radiology department, if a portable X-ray unit, a CT unit, an ultrasound unit, and a film processor could be operated on emergency power, a limited but useful number of studies could still be performed. In a digital department, however, there is a reliance on the network infrastructure to deliver images to viewing locations. The system developed for our institution uses several imaging PODs, a name we chose because it implied to us a safe, contained environment. Each POD is a stand-alone, emergency-powered network capable of generating images and displaying them in the POD or printing them to a DICOM printer. The technology used to create a POD consists of a computer with dual network interface cards joining our private, local POD network to the hospital network. In the case of an infrastructure failure, each POD can and does work independently to produce CTs, CRs, and ultrasounds. The system has been tested during disaster drills and works correctly, producing images on equipment technologists are comfortable using, with very few emergency switch-over tasks. Purpose: To provide imaging capabilities in the event of a natural or man-made disaster with infrastructure failure. Method: Each POD is on both the standard and the emergency power systems. All the vendor equipment that produces images is on a private, stand-alone network controlled by either a simple or a managed switch. Included in each POD is a dry-process DICOM printer that is rarely used during normal operations and a display workstation. One node on the private network is a PACS application processor (AP) with two network interface cards, one for the private network and one for the standard PACS network. During ordinary daily operations, all acquired images pass through this AP and are routed to the PACS archives, web servers, and workstations. If the power and network to much of the hospital were to fail, the stand-alone POD could still function: images are routed to the AP but cannot be forwarded to the main network; they can, however, be routed to the printer and display in the POD, and they are stored on the AP so that normal routing can resume when the infrastructure is restored. Results: The imaging PODs have been tested in disaster exercises in which the infrastructure was intentionally removed, and they worked as designed.
To date, we have not had to use them in a real-life scenario and we hope we never do, but we feel we have a reasonable level of emergency imaging capability if we ever need it. Conclusions: Our testing indicates that our PODs are a viable way to continue medical imaging in the face of an emergency in which a major part of our network and electrical infrastructure is destroyed.
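A minimal sketch of the POD's store-and-forward behaviour described above, assuming a simple queue on the application processor; the function names and the queue are illustrative stand-ins, not the actual PACS routing software:

```python
# Sketch of the POD routing logic: forward to the main PACS when the
# infrastructure is up, otherwise print/display locally and hold the study
# on the AP until normal routing can resume. All callables are placeholders.

from collections import deque

pending = deque()  # studies held on the application processor (AP)

def route_study(study, main_network_up, send_to_pacs, print_local, display_local):
    """Route a study to the main PACS when possible; otherwise keep it inside the POD."""
    if main_network_up:
        send_to_pacs(study)
    else:
        print_local(study)
        display_local(study)
        pending.append(study)   # stored on the AP until the network returns

def flush_pending(send_to_pacs):
    """Resume normal routing once the infrastructure is restored."""
    while pending:
        send_to_pacs(pending.popleft())

# Toy usage: one study acquired during an outage, then forwarded after recovery.
route_study("CT-001", main_network_up=False,
            send_to_pacs=lambda s: None,
            print_local=lambda s: print(f"printed {s} in the POD"),
            display_local=lambda s: None)
flush_pending(send_to_pacs=lambda s: print(f"forwarded {s} to PACS"))
```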
Pradhan, Sukanta Kumar; De, Sachinandan
2017-01-01
The nucleotide-binding and oligomerization domain (NOD)-containing protein 1 (NOD1) plays a pivotal role at the host-pathogen interface of innate immunity and triggers immune signalling pathways for the maturation and release of pro-inflammatory cytokines. Upon recognition of iE-DAP, NOD1 self-oligomerizes in an ATP-dependent fashion and interacts with the adaptor molecule receptor-interacting protein 2 (RIP2) to propagate innate immune signalling and initiate pro-inflammatory immune responses. This NOD1-RIP2 interaction transmits the downstream signals that activate the NF-κB signalling pathway and is mediated by the respective caspase-recruitment domains (CARDs). The nature of this CARD-CARD interaction has remained contentious because of inconsistent results. Hence, structural bioinformatics approaches were employed to understand the mode and nature of the interaction. MD simulation of modelled 1:1 heterodimeric complexes revealed that the type-Ia interface of NOD1CARD and the type-Ib interface of RIP2CARD might be the suitable interfaces for this interaction. Moreover, we identified three dynamically stable heterotrimeric complexes, two with an NOD1:RIP2 ratio of 1:2 and one with a ratio of 2:1. In the first trimeric complex, a type-I NOD1-RIP2 heterodimer was found interacting with an RIP2CARD through their type-IIa and IIIa interfaces. In the second and third heterotrimers, type-I homodimers of the NOD1 and RIP2 CARDs were observed interacting individually with RIP2CARD and NOD1CARD (through type-II and type-III interfaces), respectively. Overall, this study provides structural and dynamic insights into NOD1-RIP2 oligomer formation, which will be crucial for understanding the molecular basis of NOD1-mediated CARD-CARD interaction in higher and lower eukaryotes. PMID:28114344
Helping Students Design HyperCard Stacks.
ERIC Educational Resources Information Center
Dunham, Ken
1995-01-01
Discusses how to teach students to design HyperCard stacks. Highlights include introducing HyperCard, developing storyboards, introducing design concepts and scripts, presenting stacks, evaluating storyboards, and continuing projects. A sidebar presents a HyperCard stack evaluation form. (AEF)
20. SECOND FLOOR, CARDING MACHINE, BY 'CARDING SPECIALISTS, CO., LTD., ...
20. SECOND FLOOR, CARDING MACHINE, BY 'CARDING SPECIALISTS, CO., LTD., PELLON LAND WORKS, HALIFAX, ENGLAND, SN #M2983 14 (EST. DATE 1940'S+), OPPOSITE END - Bamberg Cotton Mill, Main Street, Bamberg, Bamberg County, SC
Thermal transfer structures coupling electronics card(s) to coolant-cooled structure(s)
David, Milnes P; Graybill, David P; Iyengar, Madhusudan K; Kamath, Vinod; Kochuparambil, Bejoy J; Parida, Pritish R; Schmidt, Roger R
2014-12-16
Cooling apparatuses and coolant-cooled electronic systems are provided which include thermal transfer structures configured to engage with a spring force one or more electronics cards with docking of the electronics card(s) within a respective socket(s) of the electronic system. A thermal transfer structure of the cooling apparatus includes a thermal spreader having a first thermal conduction surface, and a thermally conductive spring assembly coupled to the conduction surface of the thermal spreader and positioned and configured to reside between and physically couple a first surface of an electronics card to the first surface of the thermal spreader with docking of the electronics card within a socket of the electronic system. The thermal transfer structure is, in one embodiment, metallurgically bonded to a coolant-cooled structure and facilitates transfer of heat from the electronics card to coolant flowing through the coolant-cooled structure.
Klemic, Gladys [Naperville, IL; Bailey, Paul [Chicago, IL; Breheny, Cecilia [Yonkers, NY
2008-09-02
The present invention relates to a citizen's dosimeter. More specifically, the invention relates to a small, portable, personal dosimetry device designed to be used in the wake of an event involving a Radiological Dispersal Device (RDD), Improvised Nuclear Device (IND), or other event resulting in the contamination of a large area with radioactive material, or where on-site personal dosimetry is required. The card-sized dosimeter generally comprises: a lower card layer having an inner and an outer side; an upper card layer having an inner and an outer side; an optically stimulated luminescent material (OSLM) sandwiched between the inner side of the lower card layer and the inner side of the upper card layer during dosimeter radiation recording; a shutter means for exposing at least one side of the OSLM for dosimeter readout; and an energy compensation filter attached to the outer sides of the lower and upper card layers.
Wada, Kazushige; Nittono, Hiroshi
2004-06-01
The reasoning process in the Wason selection task was examined by measuring card inspection times in the letter-number and drinking-age problems. Twenty-four students were asked to solve the problems presented on a computer screen. Only the card touched with a mouse pointer was visible, and the total exposure time of each card was measured. Participants were allowed to cancel their previous selections at any time. Although rethinking was encouraged, the cards once selected were rarely cancelled (10% of the total selections). Moreover, most of the cancelled cards were reselected (89% of the total cancellations). Consistent with previous findings, inspection times were longer for selected cards than for nonselected cards. These results suggest that card selections are determined largely by initial heuristic processes and rarely reversed by subsequent analytic processes. The present study gives further support for the heuristic-analytic dual process theory.
Reminder card helps patients remember OCs.
1999-11-01
Organon has developed the Reminder Card to help women patients remember their regular intake of oral contraceptive (OC) pills. About 50% of women take birth control pills as prescribed, 25% miss a pill per month, and 25% miss two or more pills in the same time frame. The plastic card, about the size and shape of a credit card, contains a microchip timer. Reminder cards are available to providers who use the Starter Kits issued by the company for new-start patients on the Mircette OC. When patients begin their first pack of pills, they select the time of day they prefer to have the Reminder Card emit its tiny beep. The time is set into the microchip timer and the card is programmed to sound automatically at the pre-set time each day for the next three months. The direction for using the Reminder Card is outlined.
Multiple core computer processor with globally-accessible local memories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalf, John; Donofrio, David; Oliker, Leonid
A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.
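The globally visible local memories can be pictured with a toy address-map sketch; the memory size, core count, and flat partitioning below are assumptions made only for illustration:

```python
# Toy model of a global address space laid over per-core local memories:
# any core can load or store into any other core's memory through the map.

LOCAL_MEM_WORDS = 1024   # hypothetical words of local memory per core
NUM_CORES = 16           # hypothetical core count

memories = [[0] * LOCAL_MEM_WORDS for _ in range(NUM_CORES)]

def global_to_local(addr):
    """Map a global address to (owning core, local offset)."""
    core, offset = divmod(addr, LOCAL_MEM_WORDS)
    if core >= NUM_CORES:
        raise ValueError("address outside the global space")
    return core, offset

def load(addr):
    core, off = global_to_local(addr)
    return memories[core][off]      # any core may read another core's memory

def store(addr, value):
    core, off = global_to_local(addr)
    memories[core][off] = value

store(5 * LOCAL_MEM_WORDS + 7, 42)   # lands in core 5's local memory
assert load(5 * LOCAL_MEM_WORDS + 7) == 42
```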
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory. (authors)
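One way to realize iterated processor-pair-wise balancing in O(log N) rounds is a dimension-exchange scheme, sketched below; the pairing along rank bits and the power-of-two processor count are assumptions for illustration, not necessarily the paper's exact algorithm:

```python
# Pair-wise balancing sketch: in each of log2(N) rounds, processors are paired
# along one bit of their rank and each pair splits its combined work evenly.

def pairwise_balance(work):
    """work[i] = number of particles on processor i; len(work) must be a power of two."""
    n = len(work)
    assert n & (n - 1) == 0, "sketch assumes a power-of-two processor count"
    rounds = n.bit_length() - 1
    for bit in range(rounds):                 # O(log N) rounds
        for i in range(n):
            j = i ^ (1 << bit)                # partner differs in one rank bit
            if i < j:
                total = work[i] + work[j]
                work[i], work[j] = total // 2, total - total // 2
    return work

print(pairwise_balance([100, 0, 0, 0, 0, 0, 0, 60]))   # ends near-uniform
```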
Parallel processor-based raster graphics system architecture
Littlefield, Richard J.
1990-01-01
An apparatus for generating raster graphics images from the graphics command stream includes a plurality of graphics processors connected in parallel, each adapted to receive any part of the graphics command stream for processing the command stream part into pixel data. The apparatus also includes a frame buffer for mapping the pixel data to pixel locations and an interconnection network for interconnecting the graphics processors to the frame buffer. Through the interconnection network, each graphics processor may access any part of the frame buffer concurrently with another graphics processor accessing any other part of the frame buffer. The plurality of graphics processors can thereby transmit concurrently pixel data to pixel locations in the frame buffer.
NASA Astrophysics Data System (ADS)
Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.
2017-11-01
Current trends in processor manufacturing focus on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social-networking web applications, and big data have created huge demand for data-processing activities, and such throughput-intensive applications inherently contain data-level parallelism that is well suited to SIMD-based GPU architectures. This paper reviews the architectural aspects of multi-/many-core processors and graphics processors. Different case studies are used to compare the performance of throughput-computing applications using shared-memory programming in OpenMP and CUDA-API-based programming.
Code of Federal Regulations, 2010 CFR
2010-01-01
... provisions of Subpart B apply if credit cards are issued and (1) the card issuer and the seller are the same... 226.10. Section 226.11. This section applies when a card issuer receives a payment or other credit... Bill on a Transaction-by-Transaction Basis The following provisions of Subpart B apply if credit cards...
Eigensolution of finite element problems in a completely connected parallel architecture
NASA Technical Reports Server (NTRS)
Akl, F.; Morel, M.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.
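From the reported speed-ups one can read off the parallel efficiency (speed-up divided by processor count), as in this small calculation; the numbers are those quoted in the abstract:

```python
# Parallel efficiency for the 64-element plate case reported above.

speedups = {2: 1.86, 4: 3.13, 6: 3.18, 8: 3.61}   # processors -> speed-up, from the abstract
for p, s in speedups.items():
    print(f"{p} processors: speed-up {s:.2f}, efficiency {s / p:.2f}")
# Efficiency falls from about 0.93 on two processors to about 0.45 on eight,
# consistent with the abstract's interest in the effect of the global fronts.
```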
Extended performance electric propulsion power processor design study. Volume 2: Technical summary
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.
1977-01-01
Electric propulsion power processor technology has progressed during the past decade to the point that it is considered ready for application. Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30 cm ion thruster power processor with a beam supply power rating of 2.2 kW to 10 kW for the main propulsion power stage. Extensions in power processor performance were defined and designed in sufficient detail to determine efficiency, component weight, part count, reliability, and thermal control. A detailed design was performed on a microprocessor serving as the thyristor power processor controller. A reliability analysis was performed to evaluate the effect of the control electronics redesign. Preliminary electrical design, mechanical design, and thermal analysis were performed on a 6 kW power transformer for the beam supply. Bi-Mod mechanical, structural, and thermal control configurations were evaluated for the power processor, and preliminary estimates of mechanical weight were determined.
Wald, Ingo; Ize, Santiago
2015-07-28
Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
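A serial stand-in for the two-phase scheme (first determine which grid portion(s) bound each object, then let each processor populate its own portion) is sketched below; the one-dimensional grid and interval-shaped objects are simplifying assumptions:

```python
# Two-phase grid population sketch. Phase 1 bins objects by the grid portions
# they at least partially overlap; phase 2 has each "processor" populate its
# own portion from its bin. Serial loops stand in for the parallel processors.

def split_grid(xmin, xmax, n):
    """Divide the x-extent of the grid into n contiguous portions."""
    w = (xmax - xmin) / n
    return [(xmin + i * w, xmin + (i + 1) * w) for i in range(n)]

def phase1_bin(objects, portions):
    """Objects are (lo, hi) intervals; an object goes into every portion it overlaps."""
    bins = [[] for _ in portions]
    for lo, hi in objects:
        for k, (plo, phi) in enumerate(portions):
            if hi >= plo and lo <= phi:      # partial-overlap test
                bins[k].append((lo, hi))
    return bins

def phase2_populate(bins):
    """Each processor k populates its distinct grid portion from its bin."""
    return {k: list(objs) for k, objs in enumerate(bins)}

portions = split_grid(0.0, 8.0, n=4)
objects = [(0.5, 1.5), (1.9, 2.1), (6.0, 7.9)]
print(phase2_populate(phase1_bin(objects, portions)))
```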
Sequence information signal processor
Peterson, John C.; Chow, Edward T.; Waterman, Michael S.; Hunkapillar, Timothy J.
1999-01-01
An electronic circuit is used to compare two sequences, such as genetic sequences, to determine which alignment of the sequences produces the greatest similarity. The circuit includes a linear array of series-connected processors, each of which stores a single element from one of the sequences and compares that element with each successive element in the other sequence. For each comparison, the processor generates a scoring parameter that indicates which segment ending at those two elements produces the greatest degree of similarity between the sequences. The processor uses the scoring parameter to generate a similar scoring parameter for a comparison between the stored element and the next successive element from the other sequence. The processor also delivers the scoring parameter to the next processor in the array for use in generating a similar scoring parameter for another pair of elements. The electronic circuit determines which processor and alignment of the sequences produce the scoring parameter with the highest value.
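The scoring recurrence that the processor array evaluates is essentially a local-alignment (Smith-Waterman-style) dynamic program; a plain software version is sketched below, with assumed match, mismatch, and gap scores:

```python
# Software analogue of the array's scoring: the best similarity over all
# local alignments of two sequences. Score parameters are assumptions.

def best_local_score(a, b, match=2, mismatch=-1, gap=-1):
    """Return the highest similarity score over all local alignments of a and b."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

print(best_local_score("GATTACA", "GCATGCA"))
```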
Conditional load and store in a shared memory
Blumrich, Matthias A; Ohmacht, Martin
2015-02-03
A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
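The reservation-register idea can be modelled in a few lines: load-reserve records an address per processor and store-conditional succeeds only if that reservation still stands. This is a toy model, not the patented hardware design:

```python
# Toy load-reserve / store-conditional model with one reservation register
# per processor, held alongside a shared cache.

class SharedCache:
    def __init__(self, num_procs):
        self.mem = {}
        self.reservation = [None] * num_procs   # one reservation register per processor

    def load_reserve(self, proc, addr):
        self.reservation[proc] = addr
        return self.mem.get(addr, 0)

    def store_conditional(self, proc, addr, value):
        if self.reservation[proc] != addr:
            return False                         # reservation lost: store fails
        self.mem[addr] = value
        # A successful store invalidates every reservation on this address.
        for p, r in enumerate(self.reservation):
            if r == addr:
                self.reservation[p] = None
        return True

cache = SharedCache(num_procs=2)
old = cache.load_reserve(0, 0x40)
cache.load_reserve(1, 0x40)
assert cache.store_conditional(1, 0x40, old + 1) is True
assert cache.store_conditional(0, 0x40, old + 1) is False   # processor 0 lost its reservation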
Krishnamurthy, Vani; Satish, Suchitha; Doreswamy, Srinivasa Murthy; Vimalambike, Manjunath Gubbanna
2016-07-01
Cytological evaluation of body fluids is an important diagnostic technique, and the cytocentrifuge has contributed immensely to improving the diagnostic yield of body fluids. The cytocentrifuge requires a filter card for absorbing cell-free fluid; this is the only consumable that must be purchased from the manufacturer, at a significant cost. The aim was to compare the cell density in cytocentrifuge preparations made with commercially available filter cards and with custom-made filter cards. This was a prospective analytical study undertaken in the department of pathology of a tertiary care centre. A 300 GSM handmade paper with absorbability similar to that of the conventional card was obtained and fashioned to fit the filter card slot of the cytospin. Thirty-seven body fluids were centrifuged using both the conventional and the custom-made filter card. Cell density was measured as the number of cells per 10 high-power fields. Median cell densities were compared using the Mann-Whitney U test, and agreement between the values was analysed using Bland-Altman analysis. The median cell count per 10 high-power fields (HPF) was 386 with the conventional card and 408 with the custom-made card; the difference was not statistically significant (p = 0.66). There was no significant difference in cell density or alteration in morphology between preparations made with the two cards. A custom-made filter card can be used for cytospin cell preparations of body fluids without loss of cell density or alteration in cell morphology, and at a very low cost.
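For illustration, this is how the cell-density comparison could be run with a Mann-Whitney U test, assuming SciPy is available; the counts below are made-up placeholders, not the study's data:

```python
# Sketch of the median cell-density comparison between the two filter cards.

from scipy.stats import mannwhitneyu

conventional = [380, 395, 410, 372, 401, 366, 388]   # cells / 10 HPF (hypothetical)
custom_made  = [402, 390, 415, 377, 398, 420, 385]   # cells / 10 HPF (hypothetical)

stat, p = mannwhitneyu(conventional, custom_made, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")   # p > 0.05 would mirror the reported non-significance
```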
Code of Federal Regulations, 2011 CFR
2011-04-01
... information processors: form of application and amendments. 242.609 Section 242.609 Commodity and Securities....609 Registration of securities information processors: form of application and amendments. (a) An application for the registration of a securities information processor shall be filed on Form SIP (§ 249.1001...
Code of Federal Regulations, 2010 CFR
2010-04-01
... information processors: form of application and amendments. 242.609 Section 242.609 Commodity and Securities....609 Registration of securities information processors: form of application and amendments. (a) An application for the registration of a securities information processor shall be filed on Form SIP (§ 249.1001...
Optical Associative Processors For Visual Perception
NASA Astrophysics Data System (ADS)
Casasent, David; Telfer, Brian
1988-05-01
We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor, with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (which is achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.
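A toy linear heteroassociative memory, of the kind contrasted with autoassociative memories above, can be built from a pseudo-inverse memory matrix; the dimensions, patterns, and class coding below are illustrative only:

```python
# Minimal heteroassociative memory: input patterns are mapped to class
# vectors through a memory matrix built from the pseudo-inverse of the
# stored patterns, so recall of a stored pattern returns its class.

import numpy as np

X = np.array([[1, -1, 1, -1],        # stored input patterns (rows)
              [1,  1, -1, -1],
              [-1, 1, 1,  1]], dtype=float)
Y = np.eye(3)                        # desired class (output) vectors

M = Y.T @ np.linalg.pinv(X.T)        # memory matrix so that M @ x ~= y

probe = np.array([1, -1, 1, -1], dtype=float)   # noiseless recall of pattern 0
print(np.argmax(M @ probe))                     # -> 0
```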
Enabling Future Robotic Missions with Multicore Processors
NASA Technical Reports Server (NTRS)
Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.
2011-01-01
Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.
48 CFR 413.301 - Governmentwide commercial purchase card.
Code of Federal Regulations, 2010 CFR
2010-10-01
... purchase card. 413.301 Section 413.301 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE....301 Governmentwide commercial purchase card. USDA policy and procedures on use of the Governmentwide commercial purchase card are established in Departmental Regulation Series 5000. ...
A Cashless Society? The Plastic Revolution. Resources in Technology.
ERIC Educational Resources Information Center
Ritz, John M.
1993-01-01
Relates the history of credit cards, their evolution to current forms, and innovations (debit cards, token cards, smart cards). Considers their sociocultural impact. Provides a design brief, including objectives, resources, evaluation criteria, outcomes, and a quiz. (SK)
Engineering software development with HyperCard
NASA Technical Reports Server (NTRS)
Darko, Robert J.
1990-01-01
The successful and unsuccessful techniques used in the development of software using HyperCard are described. The viability of the HyperCard for engineering is evaluated and the future use of HyperCard by this particular group of developers is discussed.
Hot Chips and Hot Interconnects for High End Computing Systems
NASA Technical Reports Server (NTRS)
Saini, Subhash
2005-01-01
I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in NEC SX-6/7; 8. Power 4+ processor, which is used in Hitachi SR11000; 9. NEC proprietary processor, which is used in Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).
Authentication techniques for smart cards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, R.A.
1994-02-01
Smart card systems are most cost efficient when implemented as a distributed system, which is a system without central host interaction or a local database of card numbers for verifying transaction approval. A distributed system, as such, presents special card and user authentication problems. Fortunately, smart cards offer processing capabilities that provide solutions to authentication problems, provided the system is designed with proper data integrity measures. Smart card systems maintain data integrity through a security design that controls data sources and limits data changes. A good security design is usually a result of a system analysis that provides a thorough understanding of the application needs. Once designers understand the application, they may specify authentication techniques that mitigate the risk of system compromise or failure. Current authentication techniques include cryptography, passwords, challenge/response protocols, and biometrics. The security design includes these techniques to help prevent counterfeit cards, unauthorized use, or information compromise. This paper discusses card authentication and user identity techniques that enhance security for microprocessor card systems. It also describes the analysis process used for determining proper authentication techniques for a system.
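One of the techniques named above, a symmetric challenge/response, can be sketched as follows; the HMAC construction and key handling are illustrative simplifications, not a specific smart card standard:

```python
# Challenge/response sketch: the terminal issues a random challenge and the
# card answers with an HMAC over it using a shared secret, so a distributed
# terminal can verify the card without contacting a central host.

import hmac, hashlib, os

CARD_KEY = os.urandom(16)            # secret personalised into the card (assumption)

def card_respond(key, challenge):
    """What the card's microprocessor would compute."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def terminal_verify(key, challenge, response):
    """The off-line terminal checks the response with its copy of the key."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(8)
response = card_respond(CARD_KEY, challenge)
assert terminal_verify(CARD_KEY, challenge, response)
```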
Mammography screening credit card and compliance.
Schapira, D V; Kumar, N B; Clark, R A; Yag, C
1992-07-15
Screening for breast cancer using mammography has been shown to be effective in reducing mortality from breast cancer. The authors attempted to determine if use of a wallet-size plastic screening "credit" card would increase participants' compliance for subsequent mammograms when compared with traditional methods of increasing compliance. Two hundred and twenty consecutive women, ages 40-70 years, undergoing their first screening mammography were recruited and assigned randomly to four groups receiving (1) a reminder plastic credit card; (2) a reminder credit card with a written reminder; (3) an appointment card; and (4) a verbal recommendation. Return rates of the four groups were determined after 15 months. The return rate for subsequent mammograms was significantly higher for participants (72.4%) using the credit card than for participants (39.8%) exposed to traditional encouragement/reminders (P less than 0.0001). The credit card was designed to show the participant's screening anniversary, and the durability of the card may have been a factor in increasing the return rate. The use of reminder credit cards may increase compliance for periodic screening examinations for other cancers and other chronic diseases.
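The reported difference in return rates could be checked with a simple 2x2 test, assuming SciPy is available; the group sizes and counts below are approximate reconstructions from the quoted percentages, not the study's raw data:

```python
# Illustrative chi-square test on return vs no-return by reminder type.

from scipy.stats import chi2_contingency

#                 returned  not returned
table = [[79, 30],    # ~72.4% of ~109 credit-card participants (hypothetical split)
         [44, 67]]    # ~39.8% of ~111 traditionally reminded participants (hypothetical)

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # strongly significant, as the abstract reports
```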
High-performance ultra-low power VLSI analog processor for data compression
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1996-01-01
An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.
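In software terms, the array's task is a nearest-codebook search: compare an input vector against every codebook vector and emit the index of the closest one. The sketch below uses arbitrary sizes and Euclidean distance as assumptions:

```python
# Nearest-codebook (vector quantization) sketch: the digital analogue of the
# analog array comparing an input vector with all N codebook vectors at once.

import numpy as np

rng = np.random.default_rng(0)
M, N = 16, 8                         # vector length and codebook size (illustrative)
codebook = rng.normal(size=(N, M))   # one codebook vector per "column" of the array

def nearest_codebook_index(x, codebook):
    """Return the index of the codebook vector closest (Euclidean) to x."""
    distances = np.linalg.norm(codebook - x, axis=1)   # all N comparisons at once
    return int(np.argmin(distances))

x = codebook[3] + 0.05 * rng.normal(size=M)   # a slightly perturbed codeword
print(nearest_codebook_index(x, codebook))    # -> 3
```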
On the relationship between parallel computation and graph embedding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, A.K.
1989-01-01
The problem of efficiently simulating an algorithm designed for an n-processor parallel machine G on an m-processor parallel machine H with n > m arises when parallel algorithms designed for an ideal size machine are simulated on existing machines which are of a fixed size. The author studies this problem when every processor of H takes over the function of a number of processors in G, and he phrases the simulation problem as a graph embedding problem. New embeddings presented address relevant issues arising from the parallel computation environment. The main focus centers around embedding complete binary trees into smaller-sized binary trees, butterflies, and hypercubes. He also considers simultaneous embeddings of r source machines into a single hypercube. Constant factors play a crucial role in his embeddings since they are not only important in practice but also lead to interesting theoretical problems. All of his embeddings minimize dilation and load, which are the conventional cost measures in graph embeddings and determine the maximum amount of time required to simulate one step of G on H. His embeddings also optimize a new cost measure called (α,β)-utilization which characterizes how evenly the processors of H are used by the processors of G. Ideally, the utilization should be balanced (i.e., every processor of H simulates at most (n/m) processors of G) and the (α,β)-utilization measures how far off from a balanced utilization the embedding is. He presents embeddings for the situation when some processors of G have different capabilities (e.g. memory or I/O) than others and the processors with different capabilities are to be distributed uniformly among the processors of H. Placing such conditions on an embedding results in an increase in some of the cost measures.
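The conventional cost measures discussed here, load and dilation, are easy to compute for a concrete embedding; the small guest tree, host cycle, and mapping below are arbitrary examples chosen only to show the definitions:

```python
# Load = most guest processors mapped to any one host processor;
# dilation = longest stretch (host hop distance) of any guest edge.

from collections import deque

def bfs_distance(adj, s, t):
    """Hop distance from s to t in the host graph."""
    dist, frontier = {s: 0}, deque([s])
    while frontier:
        u = frontier.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    raise ValueError("host graph is disconnected")

def load_and_dilation(guest_edges, mapping, host_adj):
    load = max(list(mapping.values()).count(h) for h in set(mapping.values()))
    dilation = max(bfs_distance(host_adj, mapping[u], mapping[v]) for u, v in guest_edges)
    return load, dilation

# Guest G: a 3-level complete binary tree (7 nodes); host H: a 4-cycle.
guest_edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
host_adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
mapping = {0: 0, 1: 1, 2: 3, 3: 1, 4: 2, 5: 3, 6: 2}   # guest node -> host node
print(load_and_dilation(guest_edges, mapping, host_adj))   # (2, 1)
```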
Research on the SIM card implementing functions of transport card
NASA Astrophysics Data System (ADS)
Li, Yi; Wang, Lin
2015-12-01
This paper analyses the theory and key technologies of contact communication, contactless card communication, and the STK menu, and proposes a complete software and hardware solution for implementing a convenient and secure mobile payment system on the SIM card.
48 CFR 1313.301 - Governmentwide commercial purchase card.
Code of Federal Regulations, 2010 CFR
2010-10-01
... purchase card. 1313.301 Section 1313.301 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE....301 Governmentwide commercial purchase card. The Department's procedures for the use and control of the Governmentwide commercial purchase card are set forth in CAM 1313.301. ...